2026-01-09 00:00:07.565966 | Job console starting
2026-01-09 00:00:07.611292 | Updating git repos
2026-01-09 00:00:07.967017 | Cloning repos into workspace
2026-01-09 00:00:08.318222 | Restoring repo states
2026-01-09 00:00:08.341167 | Merging changes
2026-01-09 00:00:08.341186 | Checking out repos
2026-01-09 00:00:09.016864 | Preparing playbooks
2026-01-09 00:00:10.327730 | Running Ansible setup
2026-01-09 00:00:20.229670 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-09 00:00:22.074511 |
2026-01-09 00:00:22.074704 | PLAY [Base pre]
2026-01-09 00:00:22.188333 |
2026-01-09 00:00:22.188522 | TASK [Setup log path fact]
2026-01-09 00:00:22.259139 | orchestrator | ok
2026-01-09 00:00:22.360845 |
2026-01-09 00:00:22.361053 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-09 00:00:22.413809 | orchestrator | ok
2026-01-09 00:00:22.449113 |
2026-01-09 00:00:22.449343 | TASK [emit-job-header : Print job information]
2026-01-09 00:00:22.579156 | # Job Information
2026-01-09 00:00:22.579404 | Ansible Version: 2.16.14
2026-01-09 00:00:22.579443 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-09 00:00:22.579478 | Pipeline: periodic-midnight
2026-01-09 00:00:22.579503 | Executor: 521e9411259a
2026-01-09 00:00:22.579524 | Triggered by: https://github.com/osism/testbed
2026-01-09 00:00:22.579547 | Event ID: b398b6f6f6334cc58cb14fbf66d80ae4
2026-01-09 00:00:22.593832 |
2026-01-09 00:00:22.594012 | LOOP [emit-job-header : Print node information]
2026-01-09 00:00:23.073560 | orchestrator | ok:
2026-01-09 00:00:23.073856 | orchestrator | # Node Information
2026-01-09 00:00:23.073896 | orchestrator | Inventory Hostname: orchestrator
2026-01-09 00:00:23.073923 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-09 00:00:23.073946 | orchestrator | Username: zuul-testbed05
2026-01-09 00:00:23.073967 | orchestrator | Distro: Debian 12.12
2026-01-09 00:00:23.073992 | orchestrator | Provider: static-testbed
2026-01-09 00:00:23.074013 | orchestrator | Region:
2026-01-09 00:00:23.074035 | orchestrator | Label: testbed-orchestrator
2026-01-09 00:00:23.074056 | orchestrator | Product Name: OpenStack Nova
2026-01-09 00:00:23.074076 | orchestrator | Interface IP: 81.163.193.140
2026-01-09 00:00:23.093112 |
2026-01-09 00:00:23.093300 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-09 00:00:24.489266 | orchestrator -> localhost | changed
2026-01-09 00:00:24.497927 |
2026-01-09 00:00:24.498071 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-09 00:00:28.024636 | orchestrator -> localhost | changed
2026-01-09 00:00:28.065367 |
2026-01-09 00:00:28.065528 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-09 00:00:29.419390 | orchestrator -> localhost | ok
2026-01-09 00:00:29.430406 |
2026-01-09 00:00:29.430552 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-09 00:00:29.577937 | orchestrator | ok
2026-01-09 00:00:29.725044 | orchestrator | included: /var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-09 00:00:29.791951 |
2026-01-09 00:00:29.792096 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-09 00:00:35.962950 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-09 00:00:35.963294 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/80bff113f6db4f77b7b58d76c24d2a8f_id_rsa
2026-01-09 00:00:35.963343 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/80bff113f6db4f77b7b58d76c24d2a8f_id_rsa.pub
2026-01-09 00:00:35.963371 | orchestrator -> localhost | The key fingerprint is:
2026-01-09 00:00:35.963397 | orchestrator -> localhost | SHA256:JODJ/qk6z5UiiAxCqBColfBJz0yzAiQzsKRdoNjyuAU zuul-build-sshkey
2026-01-09 00:00:35.963421 | orchestrator -> localhost | The key's randomart image is:
2026-01-09 00:00:35.963456 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-09 00:00:35.963622 | orchestrator -> localhost | |%+oo= |
2026-01-09 00:00:35.963662 | orchestrator -> localhost | |B%oX = |
2026-01-09 00:00:35.963684 | orchestrator -> localhost | |Eo* O . . |
2026-01-09 00:00:35.963706 | orchestrator -> localhost | |== o o |
2026-01-09 00:00:35.963727 | orchestrator -> localhost | |= o . S |
2026-01-09 00:00:35.963755 | orchestrator -> localhost | |=+ . o |
2026-01-09 00:00:35.963776 | orchestrator -> localhost | |+.. . = |
2026-01-09 00:00:35.963796 | orchestrator -> localhost | | .o + |
2026-01-09 00:00:35.963817 | orchestrator -> localhost | | .++ |
2026-01-09 00:00:35.963838 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-09 00:00:35.963911 | orchestrator -> localhost | ok: Runtime: 0:00:03.438476
2026-01-09 00:00:35.979650 |
2026-01-09 00:00:35.982799 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-09 00:00:36.073130 | orchestrator | ok
2026-01-09 00:00:36.114524 | orchestrator | included: /var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-09 00:00:36.159041 |
2026-01-09 00:00:36.159231 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-09 00:00:36.225755 | orchestrator | skipping: Conditional result was False
2026-01-09 00:00:36.242282 |
2026-01-09 00:00:36.242431 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-09 00:00:37.786574 | orchestrator | changed
2026-01-09 00:00:37.799300 |
2026-01-09 00:00:37.799443 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-09 00:00:38.129423 | orchestrator | ok
2026-01-09 00:00:38.148591 |
2026-01-09 00:00:38.148738 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-09 00:00:38.669954 | orchestrator | ok
2026-01-09 00:00:38.684850 |
2026-01-09 00:00:38.692247 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-09 00:00:39.279801 | orchestrator | ok
2026-01-09 00:00:39.289220 |
2026-01-09 00:00:39.289511 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-09 00:00:39.361675 | orchestrator | skipping: Conditional result was False
2026-01-09 00:00:39.376029 |
2026-01-09 00:00:39.376218 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-09 00:00:41.723854 | orchestrator -> localhost | changed
2026-01-09 00:00:41.761920 |
2026-01-09 00:00:41.762070 | TASK [add-build-sshkey : Add back temp key]
2026-01-09 00:00:43.165170 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/80bff113f6db4f77b7b58d76c24d2a8f_id_rsa (zuul-build-sshkey)
2026-01-09 00:00:43.165552 | orchestrator -> localhost | ok: Runtime: 0:00:00.039931
2026-01-09 00:00:43.191547 |
2026-01-09 00:00:43.191756 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-09 00:00:44.036435 | orchestrator | ok
2026-01-09 00:00:44.048882 |
2026-01-09 00:00:44.049031 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-09 00:00:44.161775 | orchestrator | skipping: Conditional result was False
2026-01-09 00:00:44.391697 |
2026-01-09 00:00:44.391854 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-09 00:00:44.993962 | orchestrator | ok
2026-01-09 00:00:45.020246 |
2026-01-09 00:00:45.020401 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-09 00:00:45.118189 | orchestrator | ok
2026-01-09 00:00:45.161540 |
2026-01-09 00:00:45.161773 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-09 00:00:46.535990 | orchestrator -> localhost | ok
2026-01-09 00:00:46.544915 |
2026-01-09 00:00:46.545057 | TASK [validate-host : Collect information about the host]
2026-01-09 00:00:48.763688 | orchestrator | ok
2026-01-09 00:00:48.824297 |
2026-01-09 00:00:48.824525 | TASK [validate-host : Sanitize hostname]
2026-01-09 00:00:49.102002 | orchestrator | ok
2026-01-09 00:00:49.118117 |
2026-01-09 00:00:49.118353 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-09 00:00:52.151769 | orchestrator -> localhost | changed
2026-01-09 00:00:52.242924 |
2026-01-09 00:00:52.243131 | TASK [validate-host : Collect information about zuul worker]
2026-01-09 00:00:53.714617 | orchestrator | ok
2026-01-09 00:00:53.728401 |
2026-01-09 00:00:53.728550 | TASK [validate-host : Write out all zuul information for each host]
2026-01-09 00:00:56.760269 | orchestrator -> localhost | changed
2026-01-09 00:00:56.772840 |
2026-01-09 00:00:56.772976 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-09 00:00:57.177888 | orchestrator | ok
2026-01-09 00:00:57.205497 |
2026-01-09 00:00:57.205671 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-09 00:02:13.409390 | orchestrator | changed:
2026-01-09 00:02:13.410476 | orchestrator | .d..t...... src/
2026-01-09 00:02:13.410543 | orchestrator | .d..t...... src/github.com/
2026-01-09 00:02:13.410572 | orchestrator | .d..t...... src/github.com/osism/
2026-01-09 00:02:13.410596 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-09 00:02:13.410618 | orchestrator | RedHat.yml
2026-01-09 00:02:13.426381 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-09 00:02:13.426399 | orchestrator | RedHat.yml
2026-01-09 00:02:13.426450 | orchestrator | = 1.53.0"...
2026-01-09 00:02:25.219773 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-09 00:02:25.242245 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-09 00:02:25.410119 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-09 00:02:26.364999 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-09 00:02:26.434315 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-09 00:02:26.947191 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-09 00:02:27.019318 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-09 00:02:27.520837 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-09 00:02:27.520920 | orchestrator |
2026-01-09 00:02:27.520928 | orchestrator | Providers are signed by their developers.
2026-01-09 00:02:27.520934 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-09 00:02:27.520939 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-09 00:02:27.520954 | orchestrator |
2026-01-09 00:02:27.520960 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-09 00:02:27.520965 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-09 00:02:27.520985 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-09 00:02:27.520990 | orchestrator | you run "tofu init" in the future.
2026-01-09 00:02:27.521360 | orchestrator |
2026-01-09 00:02:27.521388 | orchestrator | OpenTofu has been successfully initialized!
2026-01-09 00:02:27.521409 | orchestrator |
2026-01-09 00:02:27.521414 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-09 00:02:27.521418 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-09 00:02:27.521422 | orchestrator | should now work.
2026-01-09 00:02:27.521426 | orchestrator |
2026-01-09 00:02:27.521430 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-09 00:02:27.521434 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-09 00:02:27.521439 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-09 00:02:27.705211 | orchestrator | Created and switched to workspace "ci"!
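Aside: the "Create Temp SSH key" task earlier in this log boils down to a single ssh-keygen invocation. A minimal sketch, assuming a scratch directory in place of the real Zuul build workspace; the key type (RSA 3072), empty passphrase, and `zuul-build-sshkey` comment match the log output above, while the path and filename here are illustrative only.

```shell
# Generate a throwaway RSA 3072 key pair, as the add-build-sshkey role does
# (-q: quiet, -N '': no passphrase, -C: key comment seen in the log).
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "${keydir}/build_id_rsa"

# Print the SHA256 fingerprint of the new key, like the one shown in the log.
ssh-keygen -l -f "${keydir}/build_id_rsa.pub"
```

The role then installs the public half into `authorized_keys` on every node and loads the private half into the executor's SSH agent, which is what the "Enable access via build key" and "Add back temp key" tasks above record.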
2026-01-09 00:02:27.705271 | orchestrator | 2026-01-09 00:02:27.705277 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state, 2026-01-09 00:02:27.705283 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state 2026-01-09 00:02:27.705288 | orchestrator | for this configuration. 2026-01-09 00:02:27.822956 | orchestrator | ci.auto.tfvars 2026-01-09 00:02:27.984847 | orchestrator | default_custom.tf 2026-01-09 00:02:29.557693 | orchestrator | data.openstack_networking_network_v2.public: Reading... 2026-01-09 00:02:30.660565 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2026-01-09 00:02:30.954094 | orchestrator | 2026-01-09 00:02:30.954182 | orchestrator | OpenTofu used the selected providers to generate the following execution 2026-01-09 00:02:30.954192 | orchestrator | plan. Resource actions are indicated with the following symbols: 2026-01-09 00:02:30.954197 | orchestrator | + create 2026-01-09 00:02:30.954202 | orchestrator | <= read (data resources) 2026-01-09 00:02:30.954207 | orchestrator | 2026-01-09 00:02:30.954211 | orchestrator | OpenTofu will perform the following actions: 2026-01-09 00:02:30.954216 | orchestrator | 2026-01-09 00:02:30.954220 | orchestrator | # data.openstack_images_image_v2.image will be read during apply 2026-01-09 00:02:30.954224 | orchestrator | # (config refers to values not yet known) 2026-01-09 00:02:30.954228 | orchestrator | <= data "openstack_images_image_v2" "image" { 2026-01-09 00:02:30.954233 | orchestrator | + checksum = (known after apply) 2026-01-09 00:02:30.954237 | orchestrator | + created_at = (known after apply) 2026-01-09 00:02:30.954241 | orchestrator | + file = (known after apply) 2026-01-09 00:02:30.954245 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954269 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954274 | orchestrator | + 
min_disk_gb = (known after apply) 2026-01-09 00:02:30.954278 | orchestrator | + min_ram_mb = (known after apply) 2026-01-09 00:02:30.954282 | orchestrator | + most_recent = true 2026-01-09 00:02:30.954286 | orchestrator | + name = (known after apply) 2026-01-09 00:02:30.954290 | orchestrator | + protected = (known after apply) 2026-01-09 00:02:30.954294 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954302 | orchestrator | + schema = (known after apply) 2026-01-09 00:02:30.954306 | orchestrator | + size_bytes = (known after apply) 2026-01-09 00:02:30.954310 | orchestrator | + tags = (known after apply) 2026-01-09 00:02:30.954314 | orchestrator | + updated_at = (known after apply) 2026-01-09 00:02:30.954318 | orchestrator | } 2026-01-09 00:02:30.954322 | orchestrator | 2026-01-09 00:02:30.954326 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply 2026-01-09 00:02:30.954330 | orchestrator | # (config refers to values not yet known) 2026-01-09 00:02:30.954334 | orchestrator | <= data "openstack_images_image_v2" "image_node" { 2026-01-09 00:02:30.954338 | orchestrator | + checksum = (known after apply) 2026-01-09 00:02:30.954341 | orchestrator | + created_at = (known after apply) 2026-01-09 00:02:30.954345 | orchestrator | + file = (known after apply) 2026-01-09 00:02:30.954349 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954353 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954357 | orchestrator | + min_disk_gb = (known after apply) 2026-01-09 00:02:30.954361 | orchestrator | + min_ram_mb = (known after apply) 2026-01-09 00:02:30.954365 | orchestrator | + most_recent = true 2026-01-09 00:02:30.954369 | orchestrator | + name = (known after apply) 2026-01-09 00:02:30.954372 | orchestrator | + protected = (known after apply) 2026-01-09 00:02:30.954376 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954380 | orchestrator | + schema = (known after apply) 
2026-01-09 00:02:30.954384 | orchestrator | + size_bytes = (known after apply) 2026-01-09 00:02:30.954388 | orchestrator | + tags = (known after apply) 2026-01-09 00:02:30.954391 | orchestrator | + updated_at = (known after apply) 2026-01-09 00:02:30.954395 | orchestrator | } 2026-01-09 00:02:30.954399 | orchestrator | 2026-01-09 00:02:30.954403 | orchestrator | # local_file.MANAGER_ADDRESS will be created 2026-01-09 00:02:30.954407 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" { 2026-01-09 00:02:30.954411 | orchestrator | + content = (known after apply) 2026-01-09 00:02:30.954415 | orchestrator | + content_base64sha256 = (known after apply) 2026-01-09 00:02:30.954419 | orchestrator | + content_base64sha512 = (known after apply) 2026-01-09 00:02:30.954422 | orchestrator | + content_md5 = (known after apply) 2026-01-09 00:02:30.954426 | orchestrator | + content_sha1 = (known after apply) 2026-01-09 00:02:30.954430 | orchestrator | + content_sha256 = (known after apply) 2026-01-09 00:02:30.954434 | orchestrator | + content_sha512 = (known after apply) 2026-01-09 00:02:30.954437 | orchestrator | + directory_permission = "0777" 2026-01-09 00:02:30.954441 | orchestrator | + file_permission = "0644" 2026-01-09 00:02:30.954445 | orchestrator | + filename = ".MANAGER_ADDRESS.ci" 2026-01-09 00:02:30.954449 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954453 | orchestrator | } 2026-01-09 00:02:30.954457 | orchestrator | 2026-01-09 00:02:30.954460 | orchestrator | # local_file.id_rsa_pub will be created 2026-01-09 00:02:30.954464 | orchestrator | + resource "local_file" "id_rsa_pub" { 2026-01-09 00:02:30.954468 | orchestrator | + content = (known after apply) 2026-01-09 00:02:30.954472 | orchestrator | + content_base64sha256 = (known after apply) 2026-01-09 00:02:30.954476 | orchestrator | + content_base64sha512 = (known after apply) 2026-01-09 00:02:30.954479 | orchestrator | + content_md5 = (known after apply) 2026-01-09 00:02:30.954483 | 
orchestrator | + content_sha1 = (known after apply) 2026-01-09 00:02:30.954487 | orchestrator | + content_sha256 = (known after apply) 2026-01-09 00:02:30.954491 | orchestrator | + content_sha512 = (known after apply) 2026-01-09 00:02:30.954494 | orchestrator | + directory_permission = "0777" 2026-01-09 00:02:30.954498 | orchestrator | + file_permission = "0644" 2026-01-09 00:02:30.954506 | orchestrator | + filename = ".id_rsa.ci.pub" 2026-01-09 00:02:30.954510 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954514 | orchestrator | } 2026-01-09 00:02:30.954518 | orchestrator | 2026-01-09 00:02:30.954525 | orchestrator | # local_file.inventory will be created 2026-01-09 00:02:30.954529 | orchestrator | + resource "local_file" "inventory" { 2026-01-09 00:02:30.954533 | orchestrator | + content = (known after apply) 2026-01-09 00:02:30.954537 | orchestrator | + content_base64sha256 = (known after apply) 2026-01-09 00:02:30.954541 | orchestrator | + content_base64sha512 = (known after apply) 2026-01-09 00:02:30.954545 | orchestrator | + content_md5 = (known after apply) 2026-01-09 00:02:30.954548 | orchestrator | + content_sha1 = (known after apply) 2026-01-09 00:02:30.954552 | orchestrator | + content_sha256 = (known after apply) 2026-01-09 00:02:30.954556 | orchestrator | + content_sha512 = (known after apply) 2026-01-09 00:02:30.954560 | orchestrator | + directory_permission = "0777" 2026-01-09 00:02:30.954564 | orchestrator | + file_permission = "0644" 2026-01-09 00:02:30.954568 | orchestrator | + filename = "inventory.ci" 2026-01-09 00:02:30.954571 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954575 | orchestrator | } 2026-01-09 00:02:30.954579 | orchestrator | 2026-01-09 00:02:30.954583 | orchestrator | # local_sensitive_file.id_rsa will be created 2026-01-09 00:02:30.954587 | orchestrator | + resource "local_sensitive_file" "id_rsa" { 2026-01-09 00:02:30.954590 | orchestrator | + content = (sensitive value) 2026-01-09 
00:02:30.954594 | orchestrator | + content_base64sha256 = (known after apply) 2026-01-09 00:02:30.954598 | orchestrator | + content_base64sha512 = (known after apply) 2026-01-09 00:02:30.954601 | orchestrator | + content_md5 = (known after apply) 2026-01-09 00:02:30.954606 | orchestrator | + content_sha1 = (known after apply) 2026-01-09 00:02:30.954609 | orchestrator | + content_sha256 = (known after apply) 2026-01-09 00:02:30.954625 | orchestrator | + content_sha512 = (known after apply) 2026-01-09 00:02:30.954629 | orchestrator | + directory_permission = "0700" 2026-01-09 00:02:30.954633 | orchestrator | + file_permission = "0600" 2026-01-09 00:02:30.954637 | orchestrator | + filename = ".id_rsa.ci" 2026-01-09 00:02:30.954641 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954644 | orchestrator | } 2026-01-09 00:02:30.954648 | orchestrator | 2026-01-09 00:02:30.954652 | orchestrator | # null_resource.node_semaphore will be created 2026-01-09 00:02:30.954656 | orchestrator | + resource "null_resource" "node_semaphore" { 2026-01-09 00:02:30.954659 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954663 | orchestrator | } 2026-01-09 00:02:30.954667 | orchestrator | 2026-01-09 00:02:30.954671 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2026-01-09 00:02:30.954675 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2026-01-09 00:02:30.954678 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954682 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954686 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954690 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954693 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954697 | orchestrator | + name = "testbed-volume-manager-base" 2026-01-09 00:02:30.954701 | orchestrator | + region = (known after apply) 2026-01-09 
00:02:30.954704 | orchestrator | + size = 80 2026-01-09 00:02:30.954708 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954712 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954716 | orchestrator | } 2026-01-09 00:02:30.954719 | orchestrator | 2026-01-09 00:02:30.954723 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2026-01-09 00:02:30.954727 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.954731 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954735 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954738 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954746 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954750 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954754 | orchestrator | + name = "testbed-volume-0-node-base" 2026-01-09 00:02:30.954758 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954761 | orchestrator | + size = 80 2026-01-09 00:02:30.954765 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954769 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954773 | orchestrator | } 2026-01-09 00:02:30.954776 | orchestrator | 2026-01-09 00:02:30.954780 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2026-01-09 00:02:30.954784 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.954788 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954791 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954795 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954799 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954803 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954806 | orchestrator | + name = 
"testbed-volume-1-node-base" 2026-01-09 00:02:30.954810 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954814 | orchestrator | + size = 80 2026-01-09 00:02:30.954817 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954821 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954825 | orchestrator | } 2026-01-09 00:02:30.954829 | orchestrator | 2026-01-09 00:02:30.954832 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2026-01-09 00:02:30.954836 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.954840 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954844 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954847 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954851 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954855 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954859 | orchestrator | + name = "testbed-volume-2-node-base" 2026-01-09 00:02:30.954862 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954866 | orchestrator | + size = 80 2026-01-09 00:02:30.954870 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954874 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954877 | orchestrator | } 2026-01-09 00:02:30.954881 | orchestrator | 2026-01-09 00:02:30.954885 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2026-01-09 00:02:30.954888 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.954892 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954896 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954900 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954903 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954907 | 
orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954913 | orchestrator | + name = "testbed-volume-3-node-base" 2026-01-09 00:02:30.954917 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954921 | orchestrator | + size = 80 2026-01-09 00:02:30.954925 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954928 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954932 | orchestrator | } 2026-01-09 00:02:30.954936 | orchestrator | 2026-01-09 00:02:30.954940 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2026-01-09 00:02:30.954943 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.954947 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.954951 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.954955 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.954962 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.954966 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.954970 | orchestrator | + name = "testbed-volume-4-node-base" 2026-01-09 00:02:30.954974 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.954977 | orchestrator | + size = 80 2026-01-09 00:02:30.954981 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.954985 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.954989 | orchestrator | } 2026-01-09 00:02:30.954997 | orchestrator | 2026-01-09 00:02:30.955000 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2026-01-09 00:02:30.955008 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-01-09 00:02:30.955012 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.955015 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.955019 | orchestrator | + id = (known after apply) 
2026-01-09 00:02:30.955023 | orchestrator | + image_id = (known after apply) 2026-01-09 00:02:30.955027 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.955031 | orchestrator | + name = "testbed-volume-5-node-base" 2026-01-09 00:02:30.955034 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.955038 | orchestrator | + size = 80 2026-01-09 00:02:30.955042 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.955045 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.955049 | orchestrator | } 2026-01-09 00:02:30.955053 | orchestrator | 2026-01-09 00:02:30.955057 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created 2026-01-09 00:02:30.955061 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-09 00:02:30.955064 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.955068 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.955072 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.955076 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.955079 | orchestrator | + name = "testbed-volume-0-node-3" 2026-01-09 00:02:30.955083 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.955087 | orchestrator | + size = 20 2026-01-09 00:02:30.955091 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.955094 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.955098 | orchestrator | } 2026-01-09 00:02:30.955102 | orchestrator | 2026-01-09 00:02:30.955106 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created 2026-01-09 00:02:30.955109 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-09 00:02:30.955113 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.955117 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.955121 | orchestrator | + id = (known after apply) 
2026-01-09 00:02:30.955124 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.955128 | orchestrator | + name = "testbed-volume-1-node-4" 2026-01-09 00:02:30.955132 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.955136 | orchestrator | + size = 20 2026-01-09 00:02:30.955139 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.955143 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.955157 | orchestrator | } 2026-01-09 00:02:30.955161 | orchestrator | 2026-01-09 00:02:30.955165 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created 2026-01-09 00:02:30.955169 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-09 00:02:30.955173 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.955176 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.955180 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.955184 | orchestrator | + metadata = (known after apply) 2026-01-09 00:02:30.955188 | orchestrator | + name = "testbed-volume-2-node-5" 2026-01-09 00:02:30.955191 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.955199 | orchestrator | + size = 20 2026-01-09 00:02:30.955202 | orchestrator | + volume_retype_policy = "never" 2026-01-09 00:02:30.955206 | orchestrator | + volume_type = "ssd" 2026-01-09 00:02:30.955210 | orchestrator | } 2026-01-09 00:02:30.955214 | orchestrator | 2026-01-09 00:02:30.955217 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created 2026-01-09 00:02:30.955221 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-09 00:02:30.955225 | orchestrator | + attachment = (known after apply) 2026-01-09 00:02:30.955229 | orchestrator | + availability_zone = "nova" 2026-01-09 00:02:30.955232 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.955236 | orchestrator | + metadata = (known after apply) 
2026-01-09 00:02:30.955240 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-09 00:02:30.955244 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955247 | orchestrator | + size = 20
2026-01-09 00:02:30.955251 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955255 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955258 | orchestrator | }
2026-01-09 00:02:30.955262 | orchestrator |
2026-01-09 00:02:30.955266 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-09 00:02:30.955270 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-09 00:02:30.955273 | orchestrator | + attachment = (known after apply)
2026-01-09 00:02:30.955277 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955281 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955285 | orchestrator | + metadata = (known after apply)
2026-01-09 00:02:30.955288 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-09 00:02:30.955292 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955298 | orchestrator | + size = 20
2026-01-09 00:02:30.955302 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955306 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955310 | orchestrator | }
2026-01-09 00:02:30.955314 | orchestrator |
2026-01-09 00:02:30.955317 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-09 00:02:30.955321 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-09 00:02:30.955325 | orchestrator | + attachment = (known after apply)
2026-01-09 00:02:30.955328 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955332 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955336 | orchestrator | + metadata = (known after apply)
2026-01-09 00:02:30.955340 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-09 00:02:30.955343 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955347 | orchestrator | + size = 20
2026-01-09 00:02:30.955351 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955354 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955358 | orchestrator | }
2026-01-09 00:02:30.955362 | orchestrator |
2026-01-09 00:02:30.955366 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-09 00:02:30.955369 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-09 00:02:30.955373 | orchestrator | + attachment = (known after apply)
2026-01-09 00:02:30.955377 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955381 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955388 | orchestrator | + metadata = (known after apply)
2026-01-09 00:02:30.955392 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-09 00:02:30.955396 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955400 | orchestrator | + size = 20
2026-01-09 00:02:30.955403 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955407 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955411 | orchestrator | }
2026-01-09 00:02:30.955415 | orchestrator |
2026-01-09 00:02:30.955418 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-09 00:02:30.955422 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-09 00:02:30.955429 | orchestrator | + attachment = (known after apply)
2026-01-09 00:02:30.955433 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955437 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955440 | orchestrator | + metadata = (known after apply)
2026-01-09 00:02:30.955444 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-09 00:02:30.955448 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955451 | orchestrator | + size = 20
2026-01-09 00:02:30.955455 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955459 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955463 | orchestrator | }
2026-01-09 00:02:30.955466 | orchestrator |
2026-01-09 00:02:30.955470 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created
2026-01-09 00:02:30.955474 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-09 00:02:30.955478 | orchestrator | + attachment = (known after apply)
2026-01-09 00:02:30.955482 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955485 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955489 | orchestrator | + metadata = (known after apply)
2026-01-09 00:02:30.955493 | orchestrator | + name = "testbed-volume-8-node-5"
2026-01-09 00:02:30.955497 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955500 | orchestrator | + size = 20
2026-01-09 00:02:30.955504 | orchestrator | + volume_retype_policy = "never"
2026-01-09 00:02:30.955508 | orchestrator | + volume_type = "ssd"
2026-01-09 00:02:30.955511 | orchestrator | }
2026-01-09 00:02:30.955515 | orchestrator |
2026-01-09 00:02:30.955519 | orchestrator | # openstack_compute_instance_v2.manager_server will be created
2026-01-09 00:02:30.955523 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" {
2026-01-09 00:02:30.955527 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.955530 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.955534 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.955538 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.955541 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955545 | orchestrator | + config_drive = true
2026-01-09 00:02:30.955549 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.955553 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.955556 | orchestrator | + flavor_name = "OSISM-4V-16"
2026-01-09 00:02:30.955560 | orchestrator | + force_delete = false
2026-01-09 00:02:30.955564 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.955567 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955571 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.955575 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.955579 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.955582 | orchestrator | + name = "testbed-manager"
2026-01-09 00:02:30.955586 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.955590 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955593 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.955597 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.955601 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.955605 | orchestrator | + user_data = (sensitive value)
2026-01-09 00:02:30.955608 | orchestrator |
2026-01-09 00:02:30.955612 | orchestrator | + block_device {
2026-01-09 00:02:30.955616 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.955620 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.955626 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.955630 | orchestrator | + multiattach = false
2026-01-09 00:02:30.955634 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.955638 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.955644 | orchestrator | }
2026-01-09 00:02:30.955648 | orchestrator |
2026-01-09 00:02:30.955652 | orchestrator | + network {
2026-01-09 00:02:30.955656 | orchestrator | + access_network = false
2026-01-09 00:02:30.955659 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.955663 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.955667 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.955671 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.955674 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.955678 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.955682 | orchestrator | }
2026-01-09 00:02:30.955686 | orchestrator | }
2026-01-09 00:02:30.955690 | orchestrator |
2026-01-09 00:02:30.955693 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created
2026-01-09 00:02:30.955697 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.955701 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.955704 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.955708 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.955712 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.955716 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.955719 | orchestrator | + config_drive = true
2026-01-09 00:02:30.955723 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.955727 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.955731 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.955734 | orchestrator | + force_delete = false
2026-01-09 00:02:30.955738 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.955742 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.955745 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.955749 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.955753 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.955757 | orchestrator | + name = "testbed-node-0"
2026-01-09 00:02:30.955760 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.955766 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.955770 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.955774 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.955778 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.955782 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.955785 | orchestrator |
2026-01-09 00:02:30.955789 | orchestrator | + block_device {
2026-01-09 00:02:30.955793 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.955797 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.955801 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.955804 | orchestrator | + multiattach = false
2026-01-09 00:02:30.955808 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.955812 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.955816 | orchestrator | }
2026-01-09 00:02:30.955819 | orchestrator |
2026-01-09 00:02:30.955823 | orchestrator | + network {
2026-01-09 00:02:30.955827 | orchestrator | + access_network = false
2026-01-09 00:02:30.955831 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.955835 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.955838 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.955842 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.955846 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.955850 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.955853 | orchestrator | }
2026-01-09 00:02:30.955857 | orchestrator | }
2026-01-09 00:02:30.960246 | orchestrator |
2026-01-09 00:02:30.960301 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created
2026-01-09 00:02:30.960307 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.960312 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.960332 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.960336 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.960340 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.960344 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.960348 | orchestrator | + config_drive = true
2026-01-09 00:02:30.960352 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.960356 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.960360 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.960364 | orchestrator | + force_delete = false
2026-01-09 00:02:30.960368 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.960371 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.960375 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.960379 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.960383 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.960386 | orchestrator | + name = "testbed-node-1"
2026-01-09 00:02:30.960390 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.960394 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.960398 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.960402 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.960405 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.960409 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.960414 | orchestrator |
2026-01-09 00:02:30.960418 | orchestrator | + block_device {
2026-01-09 00:02:30.960422 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.960425 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.960429 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.960433 | orchestrator | + multiattach = false
2026-01-09 00:02:30.960437 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.960440 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.960444 | orchestrator | }
2026-01-09 00:02:30.960448 | orchestrator |
2026-01-09 00:02:30.960452 | orchestrator | + network {
2026-01-09 00:02:30.960456 | orchestrator | + access_network = false
2026-01-09 00:02:30.960459 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.960463 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.960467 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.960471 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.960474 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.960478 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.960482 | orchestrator | }
2026-01-09 00:02:30.960486 | orchestrator | }
2026-01-09 00:02:30.960797 | orchestrator |
2026-01-09 00:02:30.960813 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created
2026-01-09 00:02:30.960818 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.960822 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.960826 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.960832 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.960835 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.960847 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.960851 | orchestrator | + config_drive = true
2026-01-09 00:02:30.960855 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.960858 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.960862 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.960866 | orchestrator | + force_delete = false
2026-01-09 00:02:30.960870 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.960873 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.960877 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.960886 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.960890 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.960894 | orchestrator | + name = "testbed-node-2"
2026-01-09 00:02:30.960898 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.960901 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.960905 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.960909 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.960913 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.960916 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.960920 | orchestrator |
2026-01-09 00:02:30.960924 | orchestrator | + block_device {
2026-01-09 00:02:30.960928 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.960931 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.960935 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.960939 | orchestrator | + multiattach = false
2026-01-09 00:02:30.960943 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.960946 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.960950 | orchestrator | }
2026-01-09 00:02:30.960954 | orchestrator |
2026-01-09 00:02:30.960958 | orchestrator | + network {
2026-01-09 00:02:30.960962 | orchestrator | + access_network = false
2026-01-09 00:02:30.960965 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.960969 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.960973 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.960977 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.960981 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.960984 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.960988 | orchestrator | }
2026-01-09 00:02:30.960992 | orchestrator | }
2026-01-09 00:02:30.961312 | orchestrator |
2026-01-09 00:02:30.961329 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created
2026-01-09 00:02:30.961333 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.961337 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.961341 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.961345 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.961348 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.961352 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.961356 | orchestrator | + config_drive = true
2026-01-09 00:02:30.961369 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.961373 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.961376 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.961380 | orchestrator | + force_delete = false
2026-01-09 00:02:30.961384 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.961388 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.961391 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.961395 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.961399 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.961403 | orchestrator | + name = "testbed-node-3"
2026-01-09 00:02:30.961406 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.961410 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.961414 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.961418 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.961421 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.961425 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.961429 | orchestrator |
2026-01-09 00:02:30.961433 | orchestrator | + block_device {
2026-01-09 00:02:30.961441 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.961445 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.961449 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.961470 | orchestrator | + multiattach = false
2026-01-09 00:02:30.961475 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.961478 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.961482 | orchestrator | }
2026-01-09 00:02:30.961486 | orchestrator |
2026-01-09 00:02:30.961490 | orchestrator | + network {
2026-01-09 00:02:30.961493 | orchestrator | + access_network = false
2026-01-09 00:02:30.961497 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.961501 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.961505 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.961508 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.961512 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.961516 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.961520 | orchestrator | }
2026-01-09 00:02:30.961524 | orchestrator | }
2026-01-09 00:02:30.961800 | orchestrator |
2026-01-09 00:02:30.961815 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created
2026-01-09 00:02:30.961819 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.961823 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.961827 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.961831 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.961835 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.961838 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.961842 | orchestrator | + config_drive = true
2026-01-09 00:02:30.961846 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.961850 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.961853 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.961857 | orchestrator | + force_delete = false
2026-01-09 00:02:30.961861 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.961865 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.961868 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.961872 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.961876 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.961880 | orchestrator | + name = "testbed-node-4"
2026-01-09 00:02:30.961883 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.961887 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.961891 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.961895 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.961898 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.961902 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.961906 | orchestrator |
2026-01-09 00:02:30.961910 | orchestrator | + block_device {
2026-01-09 00:02:30.961914 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.961917 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.961921 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.961925 | orchestrator | + multiattach = false
2026-01-09 00:02:30.961929 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.961932 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.961936 | orchestrator | }
2026-01-09 00:02:30.961940 | orchestrator |
2026-01-09 00:02:30.961944 | orchestrator | + network {
2026-01-09 00:02:30.961947 | orchestrator | + access_network = false
2026-01-09 00:02:30.961951 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.961955 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.961959 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.961962 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.961966 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.961970 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.961974 | orchestrator | }
2026-01-09 00:02:30.961977 | orchestrator | }
2026-01-09 00:02:30.963115 | orchestrator |
2026-01-09 00:02:30.963169 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created
2026-01-09 00:02:30.963175 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-01-09 00:02:30.963179 | orchestrator | + access_ip_v4 = (known after apply)
2026-01-09 00:02:30.963184 | orchestrator | + access_ip_v6 = (known after apply)
2026-01-09 00:02:30.963188 | orchestrator | + all_metadata = (known after apply)
2026-01-09 00:02:30.963193 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.963197 | orchestrator | + availability_zone = "nova"
2026-01-09 00:02:30.963201 | orchestrator | + config_drive = true
2026-01-09 00:02:30.963205 | orchestrator | + created = (known after apply)
2026-01-09 00:02:30.963209 | orchestrator | + flavor_id = (known after apply)
2026-01-09 00:02:30.963213 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-01-09 00:02:30.963217 | orchestrator | + force_delete = false
2026-01-09 00:02:30.963227 | orchestrator | + hypervisor_hostname = (known after apply)
2026-01-09 00:02:30.963231 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963234 | orchestrator | + image_id = (known after apply)
2026-01-09 00:02:30.963238 | orchestrator | + image_name = (known after apply)
2026-01-09 00:02:30.963242 | orchestrator | + key_pair = "testbed"
2026-01-09 00:02:30.963246 | orchestrator | + name = "testbed-node-5"
2026-01-09 00:02:30.963249 | orchestrator | + power_state = "active"
2026-01-09 00:02:30.963253 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963257 | orchestrator | + security_groups = (known after apply)
2026-01-09 00:02:30.963261 | orchestrator | + stop_before_destroy = false
2026-01-09 00:02:30.963264 | orchestrator | + updated = (known after apply)
2026-01-09 00:02:30.963268 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-01-09 00:02:30.963272 | orchestrator |
2026-01-09 00:02:30.963276 | orchestrator | + block_device {
2026-01-09 00:02:30.963280 | orchestrator | + boot_index = 0
2026-01-09 00:02:30.963284 | orchestrator | + delete_on_termination = false
2026-01-09 00:02:30.963287 | orchestrator | + destination_type = "volume"
2026-01-09 00:02:30.963291 | orchestrator | + multiattach = false
2026-01-09 00:02:30.963295 | orchestrator | + source_type = "volume"
2026-01-09 00:02:30.963298 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.963302 | orchestrator | }
2026-01-09 00:02:30.963306 | orchestrator |
2026-01-09 00:02:30.963310 | orchestrator | + network {
2026-01-09 00:02:30.963313 | orchestrator | + access_network = false
2026-01-09 00:02:30.963317 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-01-09 00:02:30.963321 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-01-09 00:02:30.963325 | orchestrator | + mac = (known after apply)
2026-01-09 00:02:30.963329 | orchestrator | + name = (known after apply)
2026-01-09 00:02:30.963333 | orchestrator | + port = (known after apply)
2026-01-09 00:02:30.963336 | orchestrator | + uuid = (known after apply)
2026-01-09 00:02:30.963340 | orchestrator | }
2026-01-09 00:02:30.963344 | orchestrator | }
2026-01-09 00:02:30.963398 | orchestrator |
2026-01-09 00:02:30.963409 | orchestrator | # openstack_compute_keypair_v2.key will be created
2026-01-09 00:02:30.963413 | orchestrator | + resource "openstack_compute_keypair_v2" "key" {
2026-01-09 00:02:30.963417 | orchestrator | + fingerprint = (known after apply)
2026-01-09 00:02:30.963421 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963425 | orchestrator | + name = "testbed"
2026-01-09 00:02:30.963429 | orchestrator | + private_key = (sensitive value)
2026-01-09 00:02:30.963432 | orchestrator | + public_key = (known after apply)
2026-01-09 00:02:30.963436 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963440 | orchestrator | + user_id = (known after apply)
2026-01-09 00:02:30.963444 | orchestrator | }
2026-01-09 00:02:30.963592 | orchestrator |
2026-01-09 00:02:30.963608 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2026-01-09 00:02:30.963612 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.963626 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.963630 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963634 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.963638 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963641 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.963645 | orchestrator | }
2026-01-09 00:02:30.963692 | orchestrator |
2026-01-09 00:02:30.963703 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2026-01-09 00:02:30.963708 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.963712 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.963716 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963720 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.963724 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963728 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.963731 | orchestrator | }
2026-01-09 00:02:30.963775 | orchestrator |
2026-01-09 00:02:30.963786 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2026-01-09 00:02:30.963791 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.963795 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.963798 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963802 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.963806 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963810 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.963814 | orchestrator | }
2026-01-09 00:02:30.963855 | orchestrator |
2026-01-09 00:02:30.963867 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-01-09 00:02:30.963871 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.963875 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.963879 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963883 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.963886 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963890 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.963894 | orchestrator | }
2026-01-09 00:02:30.963936 | orchestrator |
2026-01-09 00:02:30.963947 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-01-09 00:02:30.963952 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.963956 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.963960 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.963963 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.963971 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.963975 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.963979 | orchestrator | }
2026-01-09 00:02:30.964015 | orchestrator |
2026-01-09 00:02:30.964032 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-01-09 00:02:30.964036 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.964040 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.964044 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964047 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.964051 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964055 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.964059 | orchestrator | }
2026-01-09 00:02:30.964098 | orchestrator |
2026-01-09 00:02:30.964110 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-01-09 00:02:30.964115 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.964118 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.964122 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964126 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.964130 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964138 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.964141 | orchestrator | }
2026-01-09 00:02:30.964217 | orchestrator |
2026-01-09 00:02:30.964229 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-01-09 00:02:30.964234 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.964238 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.964242 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964246 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.964250 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964254 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.964257 | orchestrator | }
2026-01-09 00:02:30.964297 | orchestrator |
2026-01-09 00:02:30.964309 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-01-09 00:02:30.964313 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-09 00:02:30.964317 | orchestrator | + device = (known after apply)
2026-01-09 00:02:30.964321 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964325 | orchestrator | + instance_id = (known after apply)
2026-01-09 00:02:30.964329 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964333 | orchestrator | + volume_id = (known after apply)
2026-01-09 00:02:30.964336 | orchestrator | }
2026-01-09 00:02:30.964380 | orchestrator |
2026-01-09 00:02:30.964391 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-01-09 00:02:30.964397 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-01-09 00:02:30.964401 | orchestrator | + fixed_ip = (known after apply)
2026-01-09 00:02:30.964404 | orchestrator | + floating_ip = (known after apply)
2026-01-09 00:02:30.964408 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964412 | orchestrator | + port_id = (known after apply)
2026-01-09 00:02:30.964416 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964420 | orchestrator | }
2026-01-09 00:02:30.964493 | orchestrator |
2026-01-09 00:02:30.964505 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-01-09 00:02:30.964510 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-01-09 00:02:30.964514 | orchestrator | + address = (known after apply)
2026-01-09 00:02:30.964518 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.964521 | orchestrator | + dns_domain = (known after apply)
2026-01-09 00:02:30.964525 | orchestrator | + dns_name = (known after apply)
2026-01-09 00:02:30.964529 | orchestrator | + fixed_ip = (known after apply)
2026-01-09 00:02:30.964533 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964537 | orchestrator | + pool = "public"
2026-01-09 00:02:30.964541 | orchestrator | + port_id = (known after apply)
2026-01-09 00:02:30.964544 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.964548 | orchestrator | + subnet_id = (known after apply)
2026-01-09 00:02:30.964552 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.964556 | orchestrator | }
2026-01-09 00:02:30.964661 | orchestrator |
2026-01-09 00:02:30.964673 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-01-09 00:02:30.964678 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-01-09 00:02:30.964681 | orchestrator | + admin_state_up = (known after apply)
2026-01-09 00:02:30.964685 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.964689 | orchestrator | + availability_zone_hints = [
2026-01-09 00:02:30.964693 | orchestrator | + "nova",
2026-01-09 00:02:30.964697 | orchestrator | ]
2026-01-09 00:02:30.964700 | orchestrator | + dns_domain = (known after apply)
2026-01-09 00:02:30.964704 | orchestrator | + external = (known after apply)
2026-01-09 00:02:30.964708 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.964712 | orchestrator | + mtu = (known after apply)
2026-01-09 00:02:30.964716 | orchestrator | + name = "net-testbed-management"
2026-01-09 00:02:30.964719 | orchestrator | + port_security_enabled = (known after apply)
2026-01-09 00:02:30.964727 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.964731 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.964735 | orchestrator | + shared = (known after apply) 2026-01-09 00:02:30.964739 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.964742 | orchestrator | + transparent_vlan = (known after apply) 2026-01-09 00:02:30.964746 | orchestrator | 2026-01-09 00:02:30.964750 | orchestrator | + segments (known after apply) 2026-01-09 00:02:30.964754 | orchestrator | } 2026-01-09 00:02:30.964886 | orchestrator | 2026-01-09 00:02:30.964898 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-01-09 00:02:30.964903 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-01-09 00:02:30.964907 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.964910 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.964914 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-09 00:02:30.964921 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.964925 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.964929 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.964933 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.964936 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.964940 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.964944 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.964948 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.964951 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.964955 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.964959 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.964963 | 
orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.964966 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.964970 | orchestrator | 2026-01-09 00:02:30.964974 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.964978 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.964981 | orchestrator | } 2026-01-09 00:02:30.964985 | orchestrator | 2026-01-09 00:02:30.964989 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.964993 | orchestrator | 2026-01-09 00:02:30.964996 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.965000 | orchestrator | + ip_address = "192.168.16.5" 2026-01-09 00:02:30.965004 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.965008 | orchestrator | } 2026-01-09 00:02:30.965012 | orchestrator | } 2026-01-09 00:02:30.965168 | orchestrator | 2026-01-09 00:02:30.965181 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-01-09 00:02:30.965185 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.965189 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.965193 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.965197 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-09 00:02:30.965200 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.965204 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.965208 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.965212 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.965216 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.965219 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.965223 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.965227 | orchestrator | + network_id = (known after apply) 2026-01-09 
00:02:30.965231 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.965235 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.965238 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.965246 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.965250 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.965254 | orchestrator | 2026-01-09 00:02:30.965258 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965261 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.965265 | orchestrator | } 2026-01-09 00:02:30.965269 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965273 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.965277 | orchestrator | } 2026-01-09 00:02:30.965280 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965284 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.965288 | orchestrator | } 2026-01-09 00:02:30.965292 | orchestrator | 2026-01-09 00:02:30.965296 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.965299 | orchestrator | 2026-01-09 00:02:30.965303 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.965307 | orchestrator | + ip_address = "192.168.16.10" 2026-01-09 00:02:30.965311 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.965315 | orchestrator | } 2026-01-09 00:02:30.965318 | orchestrator | } 2026-01-09 00:02:30.965569 | orchestrator | 2026-01-09 00:02:30.965585 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-01-09 00:02:30.965590 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.965594 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.965598 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.965602 | orchestrator | + all_security_group_ids = 
(known after apply) 2026-01-09 00:02:30.965605 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.965609 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.965613 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.965617 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.965621 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.965624 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.965628 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.965632 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.965636 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.965640 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.965643 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.965647 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.965651 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.965655 | orchestrator | 2026-01-09 00:02:30.965658 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965662 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.965666 | orchestrator | } 2026-01-09 00:02:30.965670 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965673 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.965677 | orchestrator | } 2026-01-09 00:02:30.965681 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965685 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.965688 | orchestrator | } 2026-01-09 00:02:30.965692 | orchestrator | 2026-01-09 00:02:30.965696 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.965700 | orchestrator | 2026-01-09 00:02:30.965704 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.965707 | orchestrator | + ip_address = "192.168.16.11" 2026-01-09 
00:02:30.965711 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.965716 | orchestrator | } 2026-01-09 00:02:30.965719 | orchestrator | } 2026-01-09 00:02:30.965868 | orchestrator | 2026-01-09 00:02:30.965880 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-01-09 00:02:30.965884 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.965888 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.965892 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.965896 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-09 00:02:30.965900 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.965911 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.965915 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.965918 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.965922 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.965929 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.965933 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.965936 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.965940 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.965944 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.965948 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.965951 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.965955 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.965959 | orchestrator | 2026-01-09 00:02:30.965963 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965966 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.965970 | orchestrator | } 2026-01-09 00:02:30.965974 | 
orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965978 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.965981 | orchestrator | } 2026-01-09 00:02:30.965985 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.965989 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.965993 | orchestrator | } 2026-01-09 00:02:30.965996 | orchestrator | 2026-01-09 00:02:30.966000 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.966004 | orchestrator | 2026-01-09 00:02:30.966008 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.966012 | orchestrator | + ip_address = "192.168.16.12" 2026-01-09 00:02:30.966036 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.966040 | orchestrator | } 2026-01-09 00:02:30.966044 | orchestrator | } 2026-01-09 00:02:30.966207 | orchestrator | 2026-01-09 00:02:30.966219 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-01-09 00:02:30.966223 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.966227 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.966231 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.966235 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-09 00:02:30.966239 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.966243 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.966246 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.966250 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.966254 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.966258 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.966262 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.966265 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.966269 
| orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.966273 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.966277 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.966280 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.966284 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.966288 | orchestrator | 2026-01-09 00:02:30.966292 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966296 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.966299 | orchestrator | } 2026-01-09 00:02:30.966303 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966307 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.966311 | orchestrator | } 2026-01-09 00:02:30.966315 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966318 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.966322 | orchestrator | } 2026-01-09 00:02:30.966326 | orchestrator | 2026-01-09 00:02:30.966334 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.966338 | orchestrator | 2026-01-09 00:02:30.966341 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.966345 | orchestrator | + ip_address = "192.168.16.13" 2026-01-09 00:02:30.966349 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.966353 | orchestrator | } 2026-01-09 00:02:30.966356 | orchestrator | } 2026-01-09 00:02:30.966496 | orchestrator | 2026-01-09 00:02:30.966507 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-01-09 00:02:30.966512 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.966516 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.966519 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.966523 | orchestrator | + all_security_group_ids = (known after apply) 
2026-01-09 00:02:30.966527 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.966531 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.966535 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.966538 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.966542 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.966546 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.966550 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.966553 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.966557 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.966561 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.966565 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.966569 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.966573 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.966578 | orchestrator | 2026-01-09 00:02:30.966582 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966586 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.966590 | orchestrator | } 2026-01-09 00:02:30.966593 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966597 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.966601 | orchestrator | } 2026-01-09 00:02:30.966605 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966609 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.966612 | orchestrator | } 2026-01-09 00:02:30.966616 | orchestrator | 2026-01-09 00:02:30.966620 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.966624 | orchestrator | 2026-01-09 00:02:30.966628 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.966631 | orchestrator | + ip_address = "192.168.16.14" 2026-01-09 00:02:30.966635 | orchestrator 
| + subnet_id = (known after apply) 2026-01-09 00:02:30.966639 | orchestrator | } 2026-01-09 00:02:30.966643 | orchestrator | } 2026-01-09 00:02:30.966786 | orchestrator | 2026-01-09 00:02:30.966797 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-01-09 00:02:30.966802 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-09 00:02:30.966806 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.966810 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-09 00:02:30.966813 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-09 00:02:30.966817 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.966821 | orchestrator | + device_id = (known after apply) 2026-01-09 00:02:30.966825 | orchestrator | + device_owner = (known after apply) 2026-01-09 00:02:30.966829 | orchestrator | + dns_assignment = (known after apply) 2026-01-09 00:02:30.966832 | orchestrator | + dns_name = (known after apply) 2026-01-09 00:02:30.966836 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.966840 | orchestrator | + mac_address = (known after apply) 2026-01-09 00:02:30.966843 | orchestrator | + network_id = (known after apply) 2026-01-09 00:02:30.966847 | orchestrator | + port_security_enabled = (known after apply) 2026-01-09 00:02:30.966851 | orchestrator | + qos_policy_id = (known after apply) 2026-01-09 00:02:30.966858 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.966862 | orchestrator | + security_group_ids = (known after apply) 2026-01-09 00:02:30.966866 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.966870 | orchestrator | 2026-01-09 00:02:30.966873 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966877 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-09 00:02:30.966881 | orchestrator | } 2026-01-09 00:02:30.966885 | orchestrator | + allowed_address_pairs 
{ 2026-01-09 00:02:30.966888 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-09 00:02:30.966892 | orchestrator | } 2026-01-09 00:02:30.966896 | orchestrator | + allowed_address_pairs { 2026-01-09 00:02:30.966900 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-09 00:02:30.966903 | orchestrator | } 2026-01-09 00:02:30.966907 | orchestrator | 2026-01-09 00:02:30.966914 | orchestrator | + binding (known after apply) 2026-01-09 00:02:30.966919 | orchestrator | 2026-01-09 00:02:30.966922 | orchestrator | + fixed_ip { 2026-01-09 00:02:30.966926 | orchestrator | + ip_address = "192.168.16.15" 2026-01-09 00:02:30.966930 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.966934 | orchestrator | } 2026-01-09 00:02:30.966938 | orchestrator | } 2026-01-09 00:02:30.966985 | orchestrator | 2026-01-09 00:02:30.966996 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-01-09 00:02:30.967000 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-01-09 00:02:30.967004 | orchestrator | + force_destroy = false 2026-01-09 00:02:30.967008 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967012 | orchestrator | + port_id = (known after apply) 2026-01-09 00:02:30.967016 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967020 | orchestrator | + router_id = (known after apply) 2026-01-09 00:02:30.967023 | orchestrator | + subnet_id = (known after apply) 2026-01-09 00:02:30.967027 | orchestrator | } 2026-01-09 00:02:30.967113 | orchestrator | 2026-01-09 00:02:30.967124 | orchestrator | # openstack_networking_router_v2.router will be created 2026-01-09 00:02:30.967129 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-01-09 00:02:30.967132 | orchestrator | + admin_state_up = (known after apply) 2026-01-09 00:02:30.967136 | orchestrator | + all_tags = (known after apply) 2026-01-09 00:02:30.967140 | 
orchestrator | + availability_zone_hints = [ 2026-01-09 00:02:30.967171 | orchestrator | + "nova", 2026-01-09 00:02:30.967176 | orchestrator | ] 2026-01-09 00:02:30.967180 | orchestrator | + distributed = (known after apply) 2026-01-09 00:02:30.967184 | orchestrator | + enable_snat = (known after apply) 2026-01-09 00:02:30.967188 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-01-09 00:02:30.967192 | orchestrator | + external_qos_policy_id = (known after apply) 2026-01-09 00:02:30.967196 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967200 | orchestrator | + name = "testbed" 2026-01-09 00:02:30.967203 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967207 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.967211 | orchestrator | 2026-01-09 00:02:30.967215 | orchestrator | + external_fixed_ip (known after apply) 2026-01-09 00:02:30.967219 | orchestrator | } 2026-01-09 00:02:30.967308 | orchestrator | 2026-01-09 00:02:30.967319 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-01-09 00:02:30.967325 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-01-09 00:02:30.967329 | orchestrator | + description = "ssh" 2026-01-09 00:02:30.967332 | orchestrator | + direction = "ingress" 2026-01-09 00:02:30.967336 | orchestrator | + ethertype = "IPv4" 2026-01-09 00:02:30.967340 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967344 | orchestrator | + port_range_max = 22 2026-01-09 00:02:30.967348 | orchestrator | + port_range_min = 22 2026-01-09 00:02:30.967351 | orchestrator | + protocol = "tcp" 2026-01-09 00:02:30.967355 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967364 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-09 00:02:30.967368 | orchestrator | + remote_group_id = (known after apply) 2026-01-09 
00:02:30.967372 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-09 00:02:30.967376 | orchestrator | + security_group_id = (known after apply) 2026-01-09 00:02:30.967379 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.967383 | orchestrator | } 2026-01-09 00:02:30.967471 | orchestrator | 2026-01-09 00:02:30.967483 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-01-09 00:02:30.967487 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-01-09 00:02:30.967491 | orchestrator | + description = "wireguard" 2026-01-09 00:02:30.967495 | orchestrator | + direction = "ingress" 2026-01-09 00:02:30.967499 | orchestrator | + ethertype = "IPv4" 2026-01-09 00:02:30.967503 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967507 | orchestrator | + port_range_max = 51820 2026-01-09 00:02:30.967510 | orchestrator | + port_range_min = 51820 2026-01-09 00:02:30.967514 | orchestrator | + protocol = "udp" 2026-01-09 00:02:30.967518 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967522 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-09 00:02:30.967525 | orchestrator | + remote_group_id = (known after apply) 2026-01-09 00:02:30.967529 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-09 00:02:30.967533 | orchestrator | + security_group_id = (known after apply) 2026-01-09 00:02:30.967537 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.967541 | orchestrator | } 2026-01-09 00:02:30.967610 | orchestrator | 2026-01-09 00:02:30.967622 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-01-09 00:02:30.967626 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-01-09 00:02:30.967630 | orchestrator | + direction = "ingress" 2026-01-09 00:02:30.967634 
| orchestrator | + ethertype = "IPv4" 2026-01-09 00:02:30.967638 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967642 | orchestrator | + protocol = "tcp" 2026-01-09 00:02:30.967645 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967649 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-09 00:02:30.967653 | orchestrator | + remote_group_id = (known after apply) 2026-01-09 00:02:30.967657 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-09 00:02:30.967661 | orchestrator | + security_group_id = (known after apply) 2026-01-09 00:02:30.967664 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.967668 | orchestrator | } 2026-01-09 00:02:30.967733 | orchestrator | 2026-01-09 00:02:30.967744 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-01-09 00:02:30.967749 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-01-09 00:02:30.967753 | orchestrator | + direction = "ingress" 2026-01-09 00:02:30.967757 | orchestrator | + ethertype = "IPv4" 2026-01-09 00:02:30.967761 | orchestrator | + id = (known after apply) 2026-01-09 00:02:30.967764 | orchestrator | + protocol = "udp" 2026-01-09 00:02:30.967768 | orchestrator | + region = (known after apply) 2026-01-09 00:02:30.967772 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-09 00:02:30.967776 | orchestrator | + remote_group_id = (known after apply) 2026-01-09 00:02:30.967779 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-09 00:02:30.967783 | orchestrator | + security_group_id = (known after apply) 2026-01-09 00:02:30.967787 | orchestrator | + tenant_id = (known after apply) 2026-01-09 00:02:30.967791 | orchestrator | } 2026-01-09 00:02:30.967858 | orchestrator | 2026-01-09 00:02:30.967869 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will 
be created
2026-01-09 00:02:30.967877 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-01-09 00:02:30.967881 | orchestrator | + direction = "ingress"
2026-01-09 00:02:30.967885 | orchestrator | + ethertype = "IPv4"
2026-01-09 00:02:30.967889 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.967893 | orchestrator | + protocol = "icmp"
2026-01-09 00:02:30.967897 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.967900 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-09 00:02:30.967904 | orchestrator | + remote_group_id = (known after apply)
2026-01-09 00:02:30.967908 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-09 00:02:30.967912 | orchestrator | + security_group_id = (known after apply)
2026-01-09 00:02:30.967916 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.967919 | orchestrator | }
2026-01-09 00:02:30.967982 | orchestrator |
2026-01-09 00:02:30.967993 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-09 00:02:30.967998 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-09 00:02:30.968001 | orchestrator | + direction = "ingress"
2026-01-09 00:02:30.968005 | orchestrator | + ethertype = "IPv4"
2026-01-09 00:02:30.968009 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968013 | orchestrator | + protocol = "tcp"
2026-01-09 00:02:30.968017 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968020 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-09 00:02:30.968028 | orchestrator | + remote_group_id = (known after apply)
2026-01-09 00:02:30.968031 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-09 00:02:30.968035 | orchestrator | + security_group_id = (known after apply)
2026-01-09 00:02:30.968039 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968043 | orchestrator | }
2026-01-09 00:02:30.968112 | orchestrator |
2026-01-09 00:02:30.968123 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-09 00:02:30.968127 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-09 00:02:30.968131 | orchestrator | + direction = "ingress"
2026-01-09 00:02:30.968135 | orchestrator | + ethertype = "IPv4"
2026-01-09 00:02:30.968139 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968143 | orchestrator | + protocol = "udp"
2026-01-09 00:02:30.968157 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968161 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-09 00:02:30.968165 | orchestrator | + remote_group_id = (known after apply)
2026-01-09 00:02:30.968168 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-09 00:02:30.968172 | orchestrator | + security_group_id = (known after apply)
2026-01-09 00:02:30.968176 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968180 | orchestrator | }
2026-01-09 00:02:30.968248 | orchestrator |
2026-01-09 00:02:30.968260 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-09 00:02:30.968264 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-09 00:02:30.968268 | orchestrator | + direction = "ingress"
2026-01-09 00:02:30.968275 | orchestrator | + ethertype = "IPv4"
2026-01-09 00:02:30.968279 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968282 | orchestrator | + protocol = "icmp"
2026-01-09 00:02:30.968286 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968290 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-09 00:02:30.968294 | orchestrator | + remote_group_id = (known after apply)
2026-01-09 00:02:30.968297 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-09 00:02:30.968301 | orchestrator | + security_group_id = (known after apply)
2026-01-09 00:02:30.968305 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968312 | orchestrator | }
2026-01-09 00:02:30.968377 | orchestrator |
2026-01-09 00:02:30.968388 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-09 00:02:30.968393 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-09 00:02:30.968396 | orchestrator | + description = "vrrp"
2026-01-09 00:02:30.968400 | orchestrator | + direction = "ingress"
2026-01-09 00:02:30.968404 | orchestrator | + ethertype = "IPv4"
2026-01-09 00:02:30.968408 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968412 | orchestrator | + protocol = "112"
2026-01-09 00:02:30.968415 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968419 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-09 00:02:30.968423 | orchestrator | + remote_group_id = (known after apply)
2026-01-09 00:02:30.968427 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-09 00:02:30.968430 | orchestrator | + security_group_id = (known after apply)
2026-01-09 00:02:30.968434 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968438 | orchestrator | }
2026-01-09 00:02:30.968489 | orchestrator |
2026-01-09 00:02:30.968500 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-09 00:02:30.968505 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-09 00:02:30.968508 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.968512 | orchestrator | + description = "management security group"
2026-01-09 00:02:30.968516 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968520 | orchestrator | + name = "testbed-management"
2026-01-09 00:02:30.968523 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968527 | orchestrator | + stateful = (known after apply)
2026-01-09 00:02:30.968531 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968535 | orchestrator | }
2026-01-09 00:02:30.968581 | orchestrator |
2026-01-09 00:02:30.968592 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-09 00:02:30.968596 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-09 00:02:30.968600 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.968604 | orchestrator | + description = "node security group"
2026-01-09 00:02:30.968608 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968612 | orchestrator | + name = "testbed-node"
2026-01-09 00:02:30.968616 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968619 | orchestrator | + stateful = (known after apply)
2026-01-09 00:02:30.968623 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968627 | orchestrator | }
2026-01-09 00:02:30.968733 | orchestrator |
2026-01-09 00:02:30.968744 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-01-09 00:02:30.968749 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-09 00:02:30.968752 | orchestrator | + all_tags = (known after apply)
2026-01-09 00:02:30.968756 | orchestrator | + cidr = "192.168.16.0/20"
2026-01-09 00:02:30.968760 | orchestrator | + dns_nameservers = [
2026-01-09 00:02:30.968764 | orchestrator | + "8.8.8.8",
2026-01-09 00:02:30.968768 | orchestrator | + "9.9.9.9",
2026-01-09 00:02:30.968772 | orchestrator | ]
2026-01-09 00:02:30.968775 | orchestrator | + enable_dhcp = true
2026-01-09 00:02:30.968779 | orchestrator | + gateway_ip = (known after apply)
2026-01-09 00:02:30.968783 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968787 | orchestrator | + ip_version = 4
2026-01-09 00:02:30.968791 | orchestrator | + ipv6_address_mode = (known after apply)
2026-01-09 00:02:30.968795 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-01-09 00:02:30.968798 | orchestrator | + name = "subnet-testbed-management"
2026-01-09 00:02:30.968802 | orchestrator | + network_id = (known after apply)
2026-01-09 00:02:30.968806 | orchestrator | + no_gateway = false
2026-01-09 00:02:30.968810 | orchestrator | + region = (known after apply)
2026-01-09 00:02:30.968814 | orchestrator | + service_types = (known after apply)
2026-01-09 00:02:30.968821 | orchestrator | + tenant_id = (known after apply)
2026-01-09 00:02:30.968826 | orchestrator |
2026-01-09 00:02:30.968830 | orchestrator | + allocation_pool {
2026-01-09 00:02:30.968833 | orchestrator | + end = "192.168.31.250"
2026-01-09 00:02:30.968837 | orchestrator | + start = "192.168.31.200"
2026-01-09 00:02:30.968841 | orchestrator | }
2026-01-09 00:02:30.968845 | orchestrator | }
2026-01-09 00:02:30.968876 | orchestrator |
2026-01-09 00:02:30.968887 | orchestrator | # terraform_data.image will be created
2026-01-09 00:02:30.968891 | orchestrator | + resource "terraform_data" "image" {
2026-01-09 00:02:30.968895 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968899 | orchestrator | + input = "Ubuntu 24.04"
2026-01-09 00:02:30.968903 | orchestrator | + output = (known after apply)
2026-01-09 00:02:30.968907 | orchestrator | }
2026-01-09 00:02:30.968943 | orchestrator |
2026-01-09 00:02:30.968954 | orchestrator | # terraform_data.image_node will be created
2026-01-09 00:02:30.968959 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-09 00:02:30.968962 | orchestrator | + id = (known after apply)
2026-01-09 00:02:30.968966 | orchestrator | + input = "Ubuntu 24.04"
2026-01-09 00:02:30.968970 | orchestrator | + output = (known after apply)
2026-01-09 00:02:30.968974 | orchestrator | }
2026-01-09 00:02:30.968997 | orchestrator |
2026-01-09 00:02:30.969002 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-01-09 00:02:30.969013 | orchestrator |
2026-01-09 00:02:30.969018 | orchestrator | Changes to Outputs:
2026-01-09 00:02:30.969029 | orchestrator | + manager_address = (sensitive value)
2026-01-09 00:02:30.969033 | orchestrator | + private_key = (sensitive value)
2026-01-09 00:02:31.209350 | orchestrator | terraform_data.image: Creating...
2026-01-09 00:02:31.209448 | orchestrator | terraform_data.image_node: Creating...
2026-01-09 00:02:31.209456 | orchestrator | terraform_data.image: Creation complete after 0s [id=329d902b-3f5d-dd97-ed8c-3a6f07b590c4]
2026-01-09 00:02:31.209670 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=be5a7c6f-782d-e008-41a1-b276b831438e]
2026-01-09 00:02:31.237673 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-09 00:02:31.238334 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-09 00:02:31.238507 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-09 00:02:31.239318 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-09 00:02:31.239553 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-09 00:02:31.240446 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-09 00:02:31.244885 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-09 00:02:31.245085 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-09 00:02:31.245365 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-09 00:02:31.245687 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-09 00:02:31.721238 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-09 00:02:31.725290 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-09 00:02:31.815965 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-09 00:02:31.819871 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-09 00:02:32.393871 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=bdb1b4c4-021d-4722-9e3d-4ae04fbc5661]
2026-01-09 00:02:32.396180 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-09 00:02:32.447747 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-09 00:02:32.454502 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-09 00:02:34.960706 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=34135356-9cda-41c5-bcd3-e499823abbc8]
2026-01-09 00:02:34.979262 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-09 00:02:34.983065 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=026602f7-e016-4389-ab85-d50ae4a6b766]
2026-01-09 00:02:34.986120 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=abb77c6e29792418313087f96b60358f91da248f]
2026-01-09 00:02:34.990849 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-09 00:02:35.002759 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-09 00:02:35.010802 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=f9fd9e1f-b101-43e5-b1f4-80d7cd19a338]
2026-01-09 00:02:35.019183 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-09 00:02:35.033616 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=cd74cca7-b2f5-447d-904c-402f09518541]
2026-01-09 00:02:35.037241 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-09 00:02:35.047180 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=2fbe7b7d-5687-429f-987a-2175aed9e795]
2026-01-09 00:02:35.051252 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-09 00:02:35.055258 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a68cfd4f-f534-4fe8-b255-a5dba8df7f3e]
2026-01-09 00:02:35.067951 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d]
2026-01-09 00:02:35.068019 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-09 00:02:35.073867 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=49fd42726f2fdd278f1ea6a1e97a108b967fa981]
2026-01-09 00:02:35.074589 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-09 00:02:35.083991 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-09 00:02:35.102354 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=058c5952-7557-4cd3-b97b-610df2bea595]
2026-01-09 00:02:35.142238 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e30b17a9-b87f-44a9-9e38-be5c8cfc2e88]
2026-01-09 00:02:35.874721 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=2993931b-e75c-4498-90d1-d6a3c3628430]
2026-01-09 00:02:36.164947 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=9683e61e-b663-4172-888d-6e779b2b25d9]
2026-01-09 00:02:36.177207 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-09 00:02:38.467025 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=a9326bfb-8d23-4106-9716-592566db0c6a]
2026-01-09 00:02:38.539864 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=a2569c57-bcfc-494d-b27b-8c89410faa4e]
2026-01-09 00:02:38.551964 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=1d2d4152-79ed-4225-a335-fd9605d1d2cf]
2026-01-09 00:02:38.567030 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=2ba11957-52c5-41d4-8e8d-198fe339e23b]
2026-01-09 00:02:38.573956 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=6662bdcf-027c-4645-b737-d68f8c08d7d7]
2026-01-09 00:02:38.682496 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=5392d7a2-7961-43a9-927c-41bebd27776d]
2026-01-09 00:02:38.866363 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=92d29ade-a888-457c-b67f-1ab2a4c22228]
2026-01-09 00:02:38.872323 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-09 00:02:38.872398 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-09 00:02:38.874783 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-09 00:02:39.093815 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b53dc1e7-1092-44c1-b270-374d6df0035a]
2026-01-09 00:02:39.104185 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-09 00:02:39.105566 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-09 00:02:39.105659 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-09 00:02:39.106198 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-09 00:02:39.108889 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-09 00:02:39.111372 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-09 00:02:39.270146 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=0bd2e36a-85e4-4c69-a3ed-bacce2420049]
2026-01-09 00:02:39.442012 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=e4bfcf89-17c0-4ef6-8211-4a1f92efefbf]
2026-01-09 00:02:39.482051 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=78b72162-7261-4f4b-8c67-5d2f31d682c3]
2026-01-09 00:02:39.496482 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-09 00:02:39.496902 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-09 00:02:39.498819 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-09 00:02:39.499947 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-09 00:02:39.503707 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-09 00:02:39.640446 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c8aaa715-0b76-4b5a-bc30-7925b42a8104]
2026-01-09 00:02:39.653311 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-09 00:02:40.005013 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=450eb3a2-7579-4407-8a84-8b9fe841264b]
2026-01-09 00:02:40.014455 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=bf694a19-b98e-45f1-80b6-5cc43b908624]
2026-01-09 00:02:40.017523 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-09 00:02:40.026787 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-09 00:02:40.325417 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ca7693ff-8c0e-4d2c-810d-4d269beb4d4b]
2026-01-09 00:02:40.337070 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-09 00:02:40.352514 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=4ef19d0c-f44c-4289-9365-8a612ab8d0e4]
2026-01-09 00:02:40.359767 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-09 00:02:40.409391 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=6bd89052-a490-483b-b920-d44243de545f]
2026-01-09 00:02:40.632897 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=4d91b107-73f1-44e1-856c-ab3156d2326a]
2026-01-09 00:02:40.691859 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=feac9d6f-2a35-40b1-a1ba-588924d0c3cf]
2026-01-09 00:02:40.710615 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=60e17e8d-73a5-4118-8b45-3c131401ba30]
2026-01-09 00:02:40.819819 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=ec23588e-196d-4a09-bfc0-1c86e03ac667]
2026-01-09 00:02:40.946416 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1241fa17-64dd-4859-89f2-11160ebbd85b]
2026-01-09 00:02:41.015291 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=be3adc56-df05-4137-897d-8582f1a4aa80]
2026-01-09 00:02:41.031996 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=6df6a43e-aefb-40fd-b47f-702547e5f797]
2026-01-09 00:02:41.085726 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=f35b8bfd-48e0-4aa2-8158-289afbb4e3d1]
2026-01-09 00:02:42.444002 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=2dd4a29a-2fba-497d-9604-4afeae5e78d3]
2026-01-09 00:02:42.461403 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-09 00:02:42.483756 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-09 00:02:42.484204 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-09 00:02:42.484957 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-09 00:02:42.487664 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-09 00:02:42.505796 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-09 00:02:42.511798 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-09 00:02:44.320127 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=765a565c-1fae-4c4d-9882-9cf35cd70cb8]
2026-01-09 00:02:44.336344 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-09 00:02:44.337542 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-09 00:02:44.339286 | orchestrator | local_file.inventory: Creating...
2026-01-09 00:02:44.344137 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0053c0028b9e23bd772fb7fab20e3d8ea0b23f99]
2026-01-09 00:02:44.345017 | orchestrator | local_file.inventory: Creation complete after 0s [id=499dac3cdda57b1702b93a387cdc295b34033c0b]
2026-01-09 00:02:45.479081 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=765a565c-1fae-4c4d-9882-9cf35cd70cb8]
2026-01-09 00:02:52.487080 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-09 00:02:52.487257 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-09 00:02:52.487269 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-09 00:02:52.488185 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-09 00:02:52.506682 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-09 00:02:52.514215 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-09 00:03:02.490366 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-09 00:03:02.490456 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-09 00:03:02.490462 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-09 00:03:02.490473 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-09 00:03:02.507678 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-09 00:03:02.515004 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-09 00:03:03.119999 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=0e403c30-f0ea-422f-9199-6e01d66f7c3f]
2026-01-09 00:03:03.389933 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=9a3f74c4-6d05-4473-a2d0-e6f7c876c76e]
2026-01-09 00:03:12.498516 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-09 00:03:12.498662 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-09 00:03:12.498678 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-09 00:03:12.509024 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-09 00:03:13.224695 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=1e629ca0-0ada-42c7-960d-f601e273a96d]
2026-01-09 00:03:13.404966 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=2594110b-9cbb-4e64-abb2-61e646509c50]
2026-01-09 00:03:22.507320 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [41s elapsed]
2026-01-09 00:03:22.507473 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [41s elapsed]
2026-01-09 00:03:23.413521 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=d4a59e78-65af-468d-b25b-0e7b6cee0408]
2026-01-09 00:03:23.622223 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=29e0f2e8-6bcd-4f6f-8820-138de56a5ebf]
2026-01-09 00:03:23.626744 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-09 00:03:23.635498 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-09 00:03:23.652993 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-09 00:03:23.660873 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-09 00:03:23.678110 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-09 00:03:23.689235 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1545173303051157144]
2026-01-09 00:03:23.691691 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-09 00:03:23.692104 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-09 00:03:23.694781 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-09 00:03:23.697735 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-09 00:03:23.699697 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-09 00:03:23.725143 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-09 00:03:27.357206 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=29e0f2e8-6bcd-4f6f-8820-138de56a5ebf/2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d]
2026-01-09 00:03:27.377037 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=1e629ca0-0ada-42c7-960d-f601e273a96d/2fbe7b7d-5687-429f-987a-2175aed9e795]
2026-01-09 00:03:27.406623 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=0e403c30-f0ea-422f-9199-6e01d66f7c3f/058c5952-7557-4cd3-b97b-610df2bea595]
2026-01-09 00:03:29.354916 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=1e629ca0-0ada-42c7-960d-f601e273a96d/cd74cca7-b2f5-447d-904c-402f09518541]
2026-01-09 00:03:29.487995 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=1e629ca0-0ada-42c7-960d-f601e273a96d/a68cfd4f-f534-4fe8-b255-a5dba8df7f3e]
2026-01-09 00:03:29.830906 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=0e403c30-f0ea-422f-9199-6e01d66f7c3f/f9fd9e1f-b101-43e5-b1f4-80d7cd19a338]
2026-01-09 00:03:33.692587 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Still creating... [10s elapsed]
2026-01-09 00:03:33.700272 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Still creating... [10s elapsed]
2026-01-09 00:03:33.705668 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed]
2026-01-09 00:03:33.726234 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-09 00:03:33.950283 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=29e0f2e8-6bcd-4f6f-8820-138de56a5ebf/34135356-9cda-41c5-bcd3-e499823abbc8]
2026-01-09 00:03:35.892983 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 12s [id=29e0f2e8-6bcd-4f6f-8820-138de56a5ebf/026602f7-e016-4389-ab85-d50ae4a6b766]
2026-01-09 00:03:36.097444 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 12s [id=0e403c30-f0ea-422f-9199-6e01d66f7c3f/e30b17a9-b87f-44a9-9e38-be5c8cfc2e88]
2026-01-09 00:03:43.727495 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-09 00:03:45.172342 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=16c4e789-0f75-410d-bd16-e2ddd34ff94b]
2026-01-09 00:03:45.334108 | orchestrator |
2026-01-09 00:03:45.334260 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-09 00:03:45.334312 | orchestrator |
2026-01-09 00:03:45.334324 | orchestrator | Outputs:
2026-01-09 00:03:45.334333 | orchestrator |
2026-01-09 00:03:45.334342 | orchestrator | manager_address =
2026-01-09 00:03:45.334350 | orchestrator | private_key =
2026-01-09 00:03:45.717998 | orchestrator | ok: Runtime: 0:01:20.391572
2026-01-09 00:03:45.739548 |
2026-01-09 00:03:45.739691 | TASK [Create infrastructure (stable)]
2026-01-09 00:03:46.283202 | orchestrator | skipping: Conditional result was False
2026-01-09 00:03:46.305658 |
2026-01-09 00:03:46.305870 | TASK [Fetch manager address]
2026-01-09 00:03:46.822525 | orchestrator | ok
2026-01-09 00:03:46.831811 |
2026-01-09 00:03:46.831950 | TASK [Set manager_host address]
2026-01-09 00:03:46.912749 | orchestrator | ok
2026-01-09 00:03:46.922740 |
2026-01-09 00:03:46.922924 | LOOP [Update ansible collections]
2026-01-09 00:03:50.053135 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-09 00:03:50.053479 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-09 00:03:50.053544 | orchestrator | Starting galaxy collection install process
2026-01-09 00:03:50.053584 | orchestrator | Process install dependency map
2026-01-09 00:03:50.053610 | orchestrator | Starting collection install process
2026-01-09 00:03:50.053633 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-01-09 00:03:50.053661 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-01-09 00:03:50.053698 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-09 00:03:50.053772 | orchestrator | ok: Item: commons Runtime: 0:00:02.707802
2026-01-09 00:03:52.189192 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-09 00:03:52.189420 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-09 00:03:52.189468 | orchestrator | Starting galaxy collection install process
2026-01-09 00:03:52.189502 | orchestrator | Process install dependency map
2026-01-09 00:03:52.189533 | orchestrator | Starting collection install process
2026-01-09 00:03:52.189562 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-01-09 00:03:52.189592 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-01-09 00:03:52.189620 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-09 00:03:52.189665 | orchestrator | ok: Item: services Runtime: 0:00:01.834979
2026-01-09 00:03:52.215625 |
2026-01-09 00:03:52.215786 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-09 00:04:03.700664 | orchestrator | ok
2026-01-09 00:04:03.709958 |
2026-01-09 00:04:03.710086 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-09 00:05:03.755949 | orchestrator | ok
2026-01-09 00:05:03.768906 |
2026-01-09 00:05:03.769079 | TASK [Fetch manager ssh hostkey]
2026-01-09 00:05:05.361363 | orchestrator | Output suppressed because no_log was given
2026-01-09 00:05:05.379124 |
2026-01-09 00:05:05.379340 | TASK [Get ssh keypair from terraform environment]
2026-01-09 00:05:05.921410 | orchestrator | ok: Runtime: 0:00:00.012025
2026-01-09 00:05:05.945898 |
2026-01-09 00:05:05.946083 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-09 00:05:05.990166 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-09 00:05:05.997635 |
2026-01-09 00:05:05.997765 | TASK [Run manager part 0]
2026-01-09 00:05:06.945842 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-09 00:05:07.018718 | orchestrator |
2026-01-09 00:05:07.018777 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-09 00:05:07.018787 | orchestrator |
2026-01-09 00:05:07.018803 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-09 00:05:09.072210 | orchestrator | ok: [testbed-manager]
2026-01-09 00:05:09.072320 | orchestrator |
2026-01-09 00:05:09.072351 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-09 00:05:09.072361 | orchestrator |
2026-01-09 00:05:09.072370 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-09 00:05:11.142962 | orchestrator | ok: [testbed-manager]
2026-01-09 00:05:11.143041 | orchestrator |
2026-01-09 00:05:11.143050 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-09 00:05:11.791007 | orchestrator | ok: [testbed-manager]
2026-01-09 00:05:11.791932 | orchestrator |
2026-01-09 00:05:11.791962 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-09 00:05:11.847267 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:11.847333 | orchestrator |
2026-01-09 00:05:11.847347 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-09 00:05:11.880506 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:11.880559 | orchestrator |
2026-01-09 00:05:11.880567 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-09 00:05:11.908918 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:11.908978 | orchestrator |
2026-01-09 00:05:11.908990 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-09 00:05:11.938321 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:11.938390 | orchestrator |
2026-01-09 00:05:11.938402 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-09 00:05:11.977667 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:11.977712 | orchestrator |
2026-01-09 00:05:11.977721 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-09 00:05:12.007902 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:12.007959 | orchestrator |
2026-01-09 00:05:12.007972 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-09 00:05:12.035304 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:05:12.035350 | orchestrator |
2026-01-09 00:05:12.035358 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-09 00:05:12.777823 | orchestrator | changed: [testbed-manager]
2026-01-09 00:05:12.777889 | orchestrator |
2026-01-09 00:05:12.777897 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-09 00:08:09.664352 | orchestrator | changed: [testbed-manager]
2026-01-09 00:08:09.665797 | orchestrator |
2026-01-09 00:08:09.665814 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-09 00:09:53.545114 | orchestrator | changed: [testbed-manager]
2026-01-09 00:09:53.545351 | orchestrator |
2026-01-09 00:09:53.545361 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-09 00:10:16.582325 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:16.582455 | orchestrator |
2026-01-09 00:10:16.582479 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-09 00:10:26.357741 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:26.357844 | orchestrator |
2026-01-09 00:10:26.357860 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-09 00:10:26.411882 | orchestrator | ok: [testbed-manager]
2026-01-09 00:10:26.411929 | orchestrator |
2026-01-09 00:10:26.411937 | orchestrator | TASK [Get current user] ********************************************************
2026-01-09 00:10:27.243835 | orchestrator | ok: [testbed-manager]
2026-01-09 00:10:27.243886 | orchestrator |
2026-01-09 00:10:27.243896 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-09 00:10:28.017648 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:28.017755 | orchestrator |
2026-01-09 00:10:28.017771 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-09 00:10:34.666268 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:34.666555 | orchestrator |
2026-01-09 00:10:34.666615 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-09 00:10:41.017624 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:41.017722 | orchestrator |
2026-01-09 00:10:41.017731 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-09 00:10:43.833478 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:43.833586 | orchestrator |
2026-01-09 00:10:43.833605 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-01-09 00:10:45.651161 | orchestrator | changed: [testbed-manager]
2026-01-09 00:10:45.651282 | orchestrator |
2026-01-09 00:10:45.651298 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-01-09
00:10:46.842065 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-09 00:10:46.842191 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-09 00:10:46.842198 | orchestrator | 2026-01-09 00:10:46.842204 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-09 00:10:46.880002 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-09 00:10:46.880089 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-09 00:10:46.880101 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-09 00:10:46.880108 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-09 00:10:52.175387 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-09 00:10:52.175515 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-09 00:10:52.175530 | orchestrator | 2026-01-09 00:10:52.175543 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-09 00:10:52.812934 | orchestrator | changed: [testbed-manager] 2026-01-09 00:10:52.813055 | orchestrator | 2026-01-09 00:10:52.813106 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-09 00:11:12.732202 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-09 00:11:12.732980 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-09 00:11:12.733086 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-09 00:11:12.733103 | orchestrator | 2026-01-09 00:11:12.733116 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-09 00:11:15.115504 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-09 00:11:15.115647 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-09 00:11:15.115675 | orchestrator | 2026-01-09 00:11:15.115698 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-09 00:11:15.115721 | orchestrator | 2026-01-09 00:11:15.115742 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:11:16.563318 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:16.563410 | orchestrator | 2026-01-09 00:11:16.563429 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-09 00:11:16.611663 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:16.611739 | orchestrator | 2026-01-09 00:11:16.611754 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-09 00:11:16.697970 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:16.698110 | orchestrator | 2026-01-09 00:11:16.698126 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-09 00:11:17.541542 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:17.541589 | orchestrator | 2026-01-09 00:11:17.541595 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-09 00:11:18.368538 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:18.368598 | orchestrator | 2026-01-09 00:11:18.368612 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-09 00:11:19.761989 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-09 00:11:19.762176 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-09 00:11:19.762202 | orchestrator | 2026-01-09 00:11:19.762243 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-09 00:11:21.259454 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:21.259594 | orchestrator | 2026-01-09 00:11:21.259612 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-09 00:11:23.073726 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-09 00:11:23.073838 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-09 00:11:23.073853 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-09 00:11:23.073866 | orchestrator | 2026-01-09 00:11:23.073879 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-09 00:11:23.135205 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:23.135261 | orchestrator | 2026-01-09 00:11:23.135270 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-09 00:11:23.205982 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:23.206142 | orchestrator | 2026-01-09 00:11:23.206157 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-09 00:11:23.796602 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:23.796722 | orchestrator | 2026-01-09 00:11:23.796739 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-09 00:11:23.869046 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:23.869166 | orchestrator | 2026-01-09 00:11:23.869183 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-09 00:11:24.792910 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-09 00:11:24.792959 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:24.792967 | orchestrator | 2026-01-09 00:11:24.792973 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-09 00:11:24.835560 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:24.835610 | orchestrator | 2026-01-09 00:11:24.835618 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-09 00:11:24.867288 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:24.867349 | orchestrator | 2026-01-09 00:11:24.867356 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-09 00:11:24.905979 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:24.906179 | orchestrator | 2026-01-09 00:11:24.906199 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-09 00:11:24.985791 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:24.985893 | orchestrator | 2026-01-09 00:11:24.985911 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-09 00:11:25.766820 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:25.766929 | orchestrator | 2026-01-09 00:11:25.766947 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-09 00:11:25.766961 | orchestrator | 2026-01-09 00:11:25.766972 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:11:27.230084 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:27.230305 | orchestrator | 2026-01-09 00:11:27.230324 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-09 00:11:28.244491 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:28.244600 | orchestrator | 2026-01-09 00:11:28.244618 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:11:28.244632 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-09 00:11:28.244644 | orchestrator | 2026-01-09 00:11:28.792422 | orchestrator | ok: Runtime: 0:06:22.038688 2026-01-09 00:11:28.817305 | 2026-01-09 00:11:28.817508 | TASK [Point out that the log in on the manager is now possible] 2026-01-09 00:11:28.855706 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-09 00:11:28.866599 | 2026-01-09 00:11:28.866761 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-09 00:11:28.899874 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 2026-01-09 00:11:28.907479 | 2026-01-09 00:11:28.907604 | TASK [Run manager part 1 + 2] 2026-01-09 00:11:29.828973 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-09 00:11:29.893653 | orchestrator | 2026-01-09 00:11:29.893761 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-09 00:11:29.893781 | orchestrator | 2026-01-09 00:11:29.893813 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:11:32.554847 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:32.554980 | orchestrator | 2026-01-09 00:11:32.555090 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-09 00:11:32.602700 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:32.602800 | orchestrator | 2026-01-09 00:11:32.602821 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-09 00:11:32.653439 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:32.653525 | orchestrator | 2026-01-09 00:11:32.653544 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-01-09 00:11:32.702465 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:32.702576 | orchestrator | 2026-01-09 00:11:32.702595 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-09 00:11:32.779232 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:32.779323 | orchestrator | 2026-01-09 00:11:32.779343 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-09 00:11:32.858161 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:32.858217 | orchestrator | 2026-01-09 00:11:32.858225 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-09 00:11:32.902681 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-09 00:11:32.902727 | orchestrator | 2026-01-09 00:11:32.902733 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-09 00:11:33.632140 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:33.632214 | orchestrator | 2026-01-09 00:11:33.632227 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-09 00:11:33.682248 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:33.682317 | orchestrator | 2026-01-09 00:11:33.682326 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-09 00:11:35.145250 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:35.145428 | orchestrator | 2026-01-09 00:11:35.145436 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-09 00:11:35.724155 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:35.724198 | orchestrator | 2026-01-09 00:11:35.724206 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-09 00:11:36.889160 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:36.889210 | orchestrator | 2026-01-09 00:11:36.889221 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-09 00:11:52.451849 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:52.451934 | orchestrator | 2026-01-09 00:11:52.451952 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-09 00:11:53.125080 | orchestrator | ok: [testbed-manager] 2026-01-09 00:11:53.125164 | orchestrator | 2026-01-09 00:11:53.125182 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-09 00:11:53.177191 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:11:53.177266 | orchestrator | 2026-01-09 00:11:53.177280 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-09 00:11:54.167188 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:54.167641 | orchestrator | 2026-01-09 00:11:54.167671 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-09 00:11:55.191635 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:55.191762 | orchestrator | 2026-01-09 00:11:55.191789 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-09 00:11:55.795626 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:55.795745 | orchestrator | 2026-01-09 00:11:55.795764 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-09 00:11:55.837447 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-09 00:11:55.837521 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-09 00:11:55.837528 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-09 00:11:55.837533 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-09 00:11:58.893014 | orchestrator | changed: [testbed-manager] 2026-01-09 00:11:58.893138 | orchestrator | 2026-01-09 00:11:58.893166 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-09 00:12:08.371489 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-09 00:12:08.371542 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-09 00:12:08.371557 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-09 00:12:08.371570 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-09 00:12:08.371588 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-09 00:12:08.371727 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-09 00:12:08.371744 | orchestrator | 2026-01-09 00:12:08.371752 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-09 00:12:09.665472 | orchestrator | changed: [testbed-manager] 2026-01-09 00:12:09.665520 | orchestrator | 2026-01-09 00:12:09.665527 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-09 00:12:09.706513 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:12:09.706570 | orchestrator | 2026-01-09 00:12:09.706582 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-09 00:12:12.912205 | orchestrator | changed: [testbed-manager] 2026-01-09 00:12:12.912278 | orchestrator | 2026-01-09 00:12:12.912294 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-09 00:12:12.953025 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:12:12.953097 | 
orchestrator | 2026-01-09 00:12:12.953112 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-09 00:13:54.555439 | orchestrator | changed: [testbed-manager] 2026-01-09 00:13:54.555552 | orchestrator | 2026-01-09 00:13:54.555572 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-09 00:13:55.787624 | orchestrator | ok: [testbed-manager] 2026-01-09 00:13:55.787742 | orchestrator | 2026-01-09 00:13:55.787763 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:13:55.787778 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-09 00:13:55.787791 | orchestrator | 2026-01-09 00:13:56.059497 | orchestrator | ok: Runtime: 0:02:26.649141 2026-01-09 00:13:56.075595 | 2026-01-09 00:13:56.075775 | TASK [Reboot manager] 2026-01-09 00:13:57.679475 | orchestrator | ok: Runtime: 0:00:01.024403 2026-01-09 00:13:57.696452 | 2026-01-09 00:13:57.696602 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-09 00:14:14.241689 | orchestrator | ok 2026-01-09 00:14:14.249862 | 2026-01-09 00:14:14.249992 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-09 00:15:14.309675 | orchestrator | ok 2026-01-09 00:15:14.319573 | 2026-01-09 00:15:14.319734 | TASK [Deploy manager + bootstrap nodes] 2026-01-09 00:15:17.030657 | orchestrator | 2026-01-09 00:15:17.030921 | orchestrator | # DEPLOY MANAGER 2026-01-09 00:15:17.030946 | orchestrator | 2026-01-09 00:15:17.030959 | orchestrator | + set -e 2026-01-09 00:15:17.030971 | orchestrator | + echo 2026-01-09 00:15:17.030983 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-09 00:15:17.030999 | orchestrator | + echo 2026-01-09 00:15:17.031051 | orchestrator | + cat /opt/manager-vars.sh 2026-01-09 00:15:17.033005 | orchestrator | export NUMBER_OF_NODES=6 2026-01-09 
00:15:17.033028 | orchestrator | 2026-01-09 00:15:17.033039 | orchestrator | export CEPH_VERSION=reef 2026-01-09 00:15:17.033052 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-09 00:15:17.033064 | orchestrator | export MANAGER_VERSION=latest 2026-01-09 00:15:17.033085 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-09 00:15:17.033095 | orchestrator | 2026-01-09 00:15:17.033112 | orchestrator | export ARA=false 2026-01-09 00:15:17.033122 | orchestrator | export DEPLOY_MODE=manager 2026-01-09 00:15:17.033139 | orchestrator | export TEMPEST=true 2026-01-09 00:15:17.033149 | orchestrator | export IS_ZUUL=true 2026-01-09 00:15:17.033159 | orchestrator | 2026-01-09 00:15:17.033176 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 00:15:17.033186 | orchestrator | export EXTERNAL_API=false 2026-01-09 00:15:17.033196 | orchestrator | 2026-01-09 00:15:17.033206 | orchestrator | export IMAGE_USER=ubuntu 2026-01-09 00:15:17.033219 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-09 00:15:17.033229 | orchestrator | 2026-01-09 00:15:17.033239 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-09 00:15:17.033407 | orchestrator | 2026-01-09 00:15:17.033424 | orchestrator | + echo 2026-01-09 00:15:17.033434 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-09 00:15:17.034279 | orchestrator | ++ export INTERACTIVE=false 2026-01-09 00:15:17.034298 | orchestrator | ++ INTERACTIVE=false 2026-01-09 00:15:17.034307 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-09 00:15:17.034316 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-09 00:15:17.034565 | orchestrator | + source /opt/manager-vars.sh 2026-01-09 00:15:17.034577 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-09 00:15:17.034585 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-09 00:15:17.034656 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-09 00:15:17.034668 | orchestrator | ++ CEPH_VERSION=reef 2026-01-09 00:15:17.034676 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-09 00:15:17.034685 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-09 00:15:17.034693 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 00:15:17.034701 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 00:15:17.034709 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-09 00:15:17.034725 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-09 00:15:17.034738 | orchestrator | ++ export ARA=false 2026-01-09 00:15:17.034746 | orchestrator | ++ ARA=false 2026-01-09 00:15:17.034754 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-09 00:15:17.034763 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-09 00:15:17.034774 | orchestrator | ++ export TEMPEST=true 2026-01-09 00:15:17.034783 | orchestrator | ++ TEMPEST=true 2026-01-09 00:15:17.034791 | orchestrator | ++ export IS_ZUUL=true 2026-01-09 00:15:17.034799 | orchestrator | ++ IS_ZUUL=true 2026-01-09 00:15:17.034807 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 00:15:17.035059 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 00:15:17.035079 | orchestrator | ++ export EXTERNAL_API=false 2026-01-09 00:15:17.035087 | orchestrator | ++ EXTERNAL_API=false 2026-01-09 00:15:17.035095 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-09 00:15:17.035103 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-09 00:15:17.035111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-09 00:15:17.035119 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-09 00:15:17.035182 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-09 00:15:17.035193 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-09 00:15:17.035201 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-09 00:15:17.084554 | orchestrator | + docker version 2026-01-09 00:15:17.376100 | orchestrator | Client: Docker Engine - Community 2026-01-09 00:15:17.376198 | orchestrator | Version: 27.5.1 
2026-01-09 00:15:17.376215 | orchestrator | API version: 1.47 2026-01-09 00:15:17.376229 | orchestrator | Go version: go1.22.11 2026-01-09 00:15:17.376241 | orchestrator | Git commit: 9f9e405 2026-01-09 00:15:17.376253 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-09 00:15:17.376265 | orchestrator | OS/Arch: linux/amd64 2026-01-09 00:15:17.376276 | orchestrator | Context: default 2026-01-09 00:15:17.376287 | orchestrator | 2026-01-09 00:15:17.376298 | orchestrator | Server: Docker Engine - Community 2026-01-09 00:15:17.376310 | orchestrator | Engine: 2026-01-09 00:15:17.376333 | orchestrator | Version: 27.5.1 2026-01-09 00:15:17.376346 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-09 00:15:17.376389 | orchestrator | Go version: go1.22.11 2026-01-09 00:15:17.376402 | orchestrator | Git commit: 4c9b3b0 2026-01-09 00:15:17.376412 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-09 00:15:17.376423 | orchestrator | OS/Arch: linux/amd64 2026-01-09 00:15:17.376434 | orchestrator | Experimental: false 2026-01-09 00:15:17.376445 | orchestrator | containerd: 2026-01-09 00:15:17.376456 | orchestrator | Version: v2.2.1 2026-01-09 00:15:17.376468 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-09 00:15:17.376479 | orchestrator | runc: 2026-01-09 00:15:17.376490 | orchestrator | Version: 1.3.4 2026-01-09 00:15:17.376501 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-09 00:15:17.376512 | orchestrator | docker-init: 2026-01-09 00:15:17.376523 | orchestrator | Version: 0.19.0 2026-01-09 00:15:17.376536 | orchestrator | GitCommit: de40ad0 2026-01-09 00:15:17.379419 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-09 00:15:17.389302 | orchestrator | + set -e 2026-01-09 00:15:17.389337 | orchestrator | + source /opt/manager-vars.sh 2026-01-09 00:15:17.389345 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-09 00:15:17.389352 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-09 
00:15:17.389359 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-09 00:15:17.389366 | orchestrator | ++ CEPH_VERSION=reef 2026-01-09 00:15:17.389374 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-09 00:15:17.389381 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-09 00:15:17.389388 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 00:15:17.389395 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 00:15:17.389402 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-09 00:15:17.389408 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-09 00:15:17.389415 | orchestrator | ++ export ARA=false 2026-01-09 00:15:17.389422 | orchestrator | ++ ARA=false 2026-01-09 00:15:17.389429 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-09 00:15:17.389436 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-09 00:15:17.389443 | orchestrator | ++ export TEMPEST=true 2026-01-09 00:15:17.389450 | orchestrator | ++ TEMPEST=true 2026-01-09 00:15:17.389456 | orchestrator | ++ export IS_ZUUL=true 2026-01-09 00:15:17.389463 | orchestrator | ++ IS_ZUUL=true 2026-01-09 00:15:17.389470 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 00:15:17.389476 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 00:15:17.389483 | orchestrator | ++ export EXTERNAL_API=false 2026-01-09 00:15:17.389490 | orchestrator | ++ EXTERNAL_API=false 2026-01-09 00:15:17.389496 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-09 00:15:17.389503 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-09 00:15:17.389510 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-09 00:15:17.389517 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-09 00:15:17.389529 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-09 00:15:17.389536 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-09 00:15:17.389543 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-09 00:15:17.389549 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-09 00:15:17.389556 | orchestrator | ++ INTERACTIVE=false 2026-01-09 00:15:17.389563 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-09 00:15:17.389573 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-09 00:15:17.389661 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 00:15:17.389671 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 00:15:17.389713 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-09 00:15:17.397997 | orchestrator | + set -e 2026-01-09 00:15:17.398064 | orchestrator | + VERSION=reef 2026-01-09 00:15:17.398301 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-09 00:15:17.407442 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-09 00:15:17.407520 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-09 00:15:17.412455 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-09 00:15:17.418715 | orchestrator | + set -e 2026-01-09 00:15:17.418753 | orchestrator | + VERSION=2024.2 2026-01-09 00:15:17.419733 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-09 00:15:17.423837 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-09 00:15:17.423878 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-09 00:15:17.430597 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-09 00:15:17.431578 | orchestrator | ++ semver latest 7.0.0 2026-01-09 00:15:17.504435 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 00:15:17.504509 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 00:15:17.504517 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-09 00:15:17.505375 | orchestrator | ++ semver latest 10.0.0-0 2026-01-09 00:15:17.562773 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 00:15:17.564021 | orchestrator | ++ semver 2024.2 2025.1 2026-01-09 00:15:17.628360 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 00:15:17.628425 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-09 00:15:17.729422 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-09 00:15:17.730704 | orchestrator | + source /opt/venv/bin/activate 2026-01-09 00:15:17.731975 | orchestrator | ++ deactivate nondestructive 2026-01-09 00:15:17.731985 | orchestrator | ++ '[' -n '' ']' 2026-01-09 00:15:17.731989 | orchestrator | ++ '[' -n '' ']' 2026-01-09 00:15:17.732059 | orchestrator | ++ hash -r 2026-01-09 00:15:17.732064 | orchestrator | ++ '[' -n '' ']' 2026-01-09 00:15:17.732069 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-09 00:15:17.732135 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-09 00:15:17.732143 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-09 00:15:17.732267 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-09 00:15:17.732273 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-09 00:15:17.732306 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-09 00:15:17.732311 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-09 00:15:17.732354 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-09 00:15:17.732360 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-09 00:15:17.732364 | orchestrator | ++ export PATH 2026-01-09 00:15:17.732370 | orchestrator | ++ '[' -n '' ']' 2026-01-09 00:15:17.732432 | orchestrator | ++ '[' -z '' ']' 2026-01-09 00:15:17.732488 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-09 00:15:17.732538 | orchestrator | ++ PS1='(venv) ' 2026-01-09 00:15:17.732543 | orchestrator | ++ export PS1 2026-01-09 00:15:17.732547 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-09 00:15:17.732551 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-09 00:15:17.732555 | orchestrator | ++ hash -r 2026-01-09 00:15:17.732689 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-09 00:15:19.212277 | orchestrator | 2026-01-09 00:15:19.212369 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-09 00:15:19.212377 | orchestrator | 2026-01-09 00:15:19.212381 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-09 00:15:19.805828 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:19.805926 | orchestrator | 2026-01-09 00:15:19.805935 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-09 00:15:20.829690 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:20.829815 | orchestrator | 2026-01-09 00:15:20.829833 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-09 00:15:20.829846 | orchestrator | 2026-01-09 00:15:20.829939 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:15:23.307657 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:23.307784 | orchestrator | 2026-01-09 00:15:23.307801 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-09 00:15:23.351908 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:23.351996 | orchestrator | 2026-01-09 00:15:23.352004 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-09 00:15:23.822343 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:23.822450 | orchestrator | 2026-01-09 00:15:23.822461 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-01-09 00:15:23.865719 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:15:23.865801 | orchestrator | 2026-01-09 00:15:23.865807 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-09 00:15:24.217675 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:24.217765 | orchestrator | 2026-01-09 00:15:24.217773 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-09 00:15:24.275558 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:15:24.275636 | orchestrator | 2026-01-09 00:15:24.275642 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-09 00:15:24.616317 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:24.616466 | orchestrator | 2026-01-09 00:15:24.616487 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-09 00:15:24.754598 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:15:24.754716 | orchestrator | 2026-01-09 00:15:24.754734 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-09 00:15:24.754748 | orchestrator | 2026-01-09 00:15:24.754760 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:15:26.547519 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:26.547646 | orchestrator | 2026-01-09 00:15:26.547663 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-09 00:15:26.647113 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-09 00:15:26.647242 | orchestrator | 2026-01-09 00:15:26.647259 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-09 00:15:26.708020 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-09 00:15:26.708129 | orchestrator | 2026-01-09 00:15:26.708145 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-09 00:15:27.847658 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-09 00:15:27.847777 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-09 00:15:27.847793 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-09 00:15:27.847806 | orchestrator | 2026-01-09 00:15:27.847818 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-09 00:15:29.676518 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-09 00:15:29.676642 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-09 00:15:29.676661 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-09 00:15:29.676674 | orchestrator | 2026-01-09 00:15:29.676686 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-09 00:15:30.384178 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-09 00:15:30.384292 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:30.384310 | orchestrator | 2026-01-09 00:15:30.384323 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-09 00:15:31.039606 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-09 00:15:31.039724 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:31.039742 | orchestrator | 2026-01-09 00:15:31.039756 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-09 00:15:31.098339 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:15:31.098445 | orchestrator | 2026-01-09 
00:15:31.098461 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-09 00:15:31.486931 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:31.487049 | orchestrator | 2026-01-09 00:15:31.487067 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-09 00:15:31.571568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-09 00:15:31.571691 | orchestrator | 2026-01-09 00:15:31.571708 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-09 00:15:32.714327 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:32.714407 | orchestrator | 2026-01-09 00:15:32.714419 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-09 00:15:33.627196 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:33.627307 | orchestrator | 2026-01-09 00:15:33.627324 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-09 00:15:52.458277 | orchestrator | changed: [testbed-manager] 2026-01-09 00:15:52.458390 | orchestrator | 2026-01-09 00:15:52.458408 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-09 00:15:52.529201 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:15:52.529315 | orchestrator | 2026-01-09 00:15:52.529332 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-09 00:15:52.529346 | orchestrator | 2026-01-09 00:15:52.529390 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:15:54.458794 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:54.459000 | orchestrator | 2026-01-09 00:15:54.459029 | orchestrator | TASK [Apply manager role] 
****************************************************** 2026-01-09 00:15:54.565549 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-09 00:15:54.565659 | orchestrator | 2026-01-09 00:15:54.565675 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-09 00:15:54.629252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-09 00:15:54.629385 | orchestrator | 2026-01-09 00:15:54.629412 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-09 00:15:57.402783 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:57.402995 | orchestrator | 2026-01-09 00:15:57.403025 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-09 00:15:57.453107 | orchestrator | ok: [testbed-manager] 2026-01-09 00:15:57.453241 | orchestrator | 2026-01-09 00:15:57.453260 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-09 00:15:57.597070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-09 00:15:57.597200 | orchestrator | 2026-01-09 00:15:57.597217 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-09 00:16:00.527989 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-09 00:16:00.528106 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-09 00:16:00.528122 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-09 00:16:00.528135 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-09 00:16:00.528146 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-09 00:16:00.528157 | 
orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-09 00:16:00.528168 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-09 00:16:00.528180 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-09 00:16:00.528191 | orchestrator | 2026-01-09 00:16:00.528204 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-09 00:16:01.168756 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:01.168907 | orchestrator | 2026-01-09 00:16:01.168926 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-09 00:16:01.845818 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:01.845951 | orchestrator | 2026-01-09 00:16:01.845964 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-09 00:16:01.930790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-09 00:16:01.930945 | orchestrator | 2026-01-09 00:16:01.930962 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-09 00:16:03.212536 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-09 00:16:03.212663 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-09 00:16:03.212679 | orchestrator | 2026-01-09 00:16:03.212693 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-09 00:16:03.860906 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:03.861018 | orchestrator | 2026-01-09 00:16:03.861034 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-09 00:16:03.918458 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:16:03.918559 | orchestrator | 2026-01-09 00:16:03.918576 | 
orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-09 00:16:03.992415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-09 00:16:03.992534 | orchestrator | 2026-01-09 00:16:03.992550 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-09 00:16:04.607599 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:04.607766 | orchestrator | 2026-01-09 00:16:04.607819 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-09 00:16:04.679241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-09 00:16:04.679337 | orchestrator | 2026-01-09 00:16:04.679349 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-09 00:16:06.113542 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-09 00:16:06.113655 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-09 00:16:06.113672 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:06.113686 | orchestrator | 2026-01-09 00:16:06.113698 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-09 00:16:06.791079 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:06.791195 | orchestrator | 2026-01-09 00:16:06.791212 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-09 00:16:06.847689 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:16:06.847794 | orchestrator | 2026-01-09 00:16:06.847824 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-09 00:16:06.950797 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-09 00:16:06.950946 | orchestrator | 2026-01-09 00:16:06.950986 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-09 00:16:07.496964 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:07.497085 | orchestrator | 2026-01-09 00:16:07.497103 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-09 00:16:07.945303 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:07.945417 | orchestrator | 2026-01-09 00:16:07.945434 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-09 00:16:09.187564 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-09 00:16:09.187690 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-09 00:16:09.187707 | orchestrator | 2026-01-09 00:16:09.187720 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-09 00:16:09.856294 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:09.856395 | orchestrator | 2026-01-09 00:16:09.856408 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-09 00:16:10.271898 | orchestrator | ok: [testbed-manager] 2026-01-09 00:16:10.272017 | orchestrator | 2026-01-09 00:16:10.272034 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-09 00:16:10.654707 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:10.654908 | orchestrator | 2026-01-09 00:16:10.654946 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-09 00:16:10.705975 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:16:10.706123 | orchestrator | 2026-01-09 00:16:10.706137 | orchestrator | 
TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-09 00:16:10.783447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-09 00:16:10.783522 | orchestrator | 2026-01-09 00:16:10.783529 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-09 00:16:10.840696 | orchestrator | ok: [testbed-manager] 2026-01-09 00:16:10.840788 | orchestrator | 2026-01-09 00:16:10.840804 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-09 00:16:12.924673 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-09 00:16:12.924798 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-09 00:16:12.924814 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-09 00:16:12.924826 | orchestrator | 2026-01-09 00:16:12.924839 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-09 00:16:13.669274 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:13.669397 | orchestrator | 2026-01-09 00:16:13.669416 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-09 00:16:14.415135 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:14.415287 | orchestrator | 2026-01-09 00:16:14.415322 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-09 00:16:15.141099 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:15.141215 | orchestrator | 2026-01-09 00:16:15.141233 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-09 00:16:15.202492 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-09 00:16:15.202593 | orchestrator | 2026-01-09 00:16:15.202605 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-09 00:16:15.248582 | orchestrator | ok: [testbed-manager] 2026-01-09 00:16:15.248686 | orchestrator | 2026-01-09 00:16:15.248700 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-09 00:16:15.997432 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-09 00:16:15.997544 | orchestrator | 2026-01-09 00:16:15.997561 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-09 00:16:16.082361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-09 00:16:16.082475 | orchestrator | 2026-01-09 00:16:16.082493 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-09 00:16:16.810136 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:16.810233 | orchestrator | 2026-01-09 00:16:16.810245 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-09 00:16:17.442473 | orchestrator | ok: [testbed-manager] 2026-01-09 00:16:17.442602 | orchestrator | 2026-01-09 00:16:17.442621 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-09 00:16:17.499264 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:16:17.499355 | orchestrator | 2026-01-09 00:16:17.499367 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-09 00:16:17.559678 | orchestrator | ok: [testbed-manager] 2026-01-09 00:16:17.559781 | orchestrator | 2026-01-09 00:16:17.559799 | 
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-09 00:16:18.426938 | orchestrator | changed: [testbed-manager] 2026-01-09 00:16:18.427057 | orchestrator | 2026-01-09 00:16:18.427075 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-09 00:17:29.018869 | orchestrator | changed: [testbed-manager] 2026-01-09 00:17:29.019063 | orchestrator | 2026-01-09 00:17:29.019090 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-09 00:17:29.986786 | orchestrator | ok: [testbed-manager] 2026-01-09 00:17:29.986868 | orchestrator | 2026-01-09 00:17:29.986875 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-09 00:17:30.046772 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:17:30.046969 | orchestrator | 2026-01-09 00:17:30.047001 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-09 00:17:33.133222 | orchestrator | changed: [testbed-manager] 2026-01-09 00:17:33.133382 | orchestrator | 2026-01-09 00:17:33.133463 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-09 00:17:33.203646 | orchestrator | ok: [testbed-manager] 2026-01-09 00:17:33.203776 | orchestrator | 2026-01-09 00:17:33.203795 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-09 00:17:33.203810 | orchestrator | 2026-01-09 00:17:33.203822 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-09 00:17:33.250988 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:17:33.251087 | orchestrator | 2026-01-09 00:17:33.251101 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-09 00:18:33.313413 | orchestrator | Pausing for 
60 seconds 2026-01-09 00:18:33.313542 | orchestrator | changed: [testbed-manager] 2026-01-09 00:18:33.313562 | orchestrator | 2026-01-09 00:18:33.313577 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-09 00:18:36.337551 | orchestrator | changed: [testbed-manager] 2026-01-09 00:18:36.337700 | orchestrator | 2026-01-09 00:18:36.337722 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-09 00:19:38.455833 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-09 00:19:38.456100 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-09 00:19:38.456120 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-01-09 00:19:38.456132 | orchestrator | changed: [testbed-manager] 2026-01-09 00:19:38.456146 | orchestrator | 2026-01-09 00:19:38.456158 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-09 00:19:49.758756 | orchestrator | changed: [testbed-manager] 2026-01-09 00:19:49.758933 | orchestrator | 2026-01-09 00:19:49.758955 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-09 00:19:49.838944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-09 00:19:49.839053 | orchestrator | 2026-01-09 00:19:49.839068 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-09 00:19:49.839081 | orchestrator | 2026-01-09 00:19:49.839092 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-09 00:19:49.908569 | orchestrator | skipping: [testbed-manager] 2026-01-09 
00:19:49.908666 | orchestrator | 2026-01-09 00:19:49.908677 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-09 00:19:49.977605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-09 00:19:49.977723 | orchestrator | 2026-01-09 00:19:49.977740 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-09 00:19:50.798511 | orchestrator | changed: [testbed-manager] 2026-01-09 00:19:50.798643 | orchestrator | 2026-01-09 00:19:50.798671 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-09 00:19:54.157259 | orchestrator | ok: [testbed-manager] 2026-01-09 00:19:54.157357 | orchestrator | 2026-01-09 00:19:54.157374 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-09 00:19:54.220745 | orchestrator | ok: [testbed-manager] => { 2026-01-09 00:19:54.220830 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-09 00:19:54.220880 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-09 00:19:54.220894 | orchestrator | "Checking running containers against expected versions...", 2026-01-09 00:19:54.220906 | orchestrator | "", 2026-01-09 00:19:54.220918 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-09 00:19:54.220930 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-09 00:19:54.220941 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.220952 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-09 00:19:54.220964 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.220975 | orchestrator | "", 2026-01-09 00:19:54.220986 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 
2026-01-09 00:19:54.220997 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-09 00:19:54.221008 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221019 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-09 00:19:54.221029 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221040 | orchestrator | "", 2026-01-09 00:19:54.221051 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-09 00:19:54.221062 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-09 00:19:54.221073 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221084 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-09 00:19:54.221095 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221106 | orchestrator | "", 2026-01-09 00:19:54.221117 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-09 00:19:54.221128 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-09 00:19:54.221139 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221150 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-09 00:19:54.221188 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221199 | orchestrator | "", 2026-01-09 00:19:54.221211 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-09 00:19:54.221221 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-09 00:19:54.221232 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221246 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-09 00:19:54.221267 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221288 | orchestrator | "", 2026-01-09 00:19:54.221308 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-09 00:19:54.221333 | orchestrator | " Expected: 
registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221359 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221380 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221400 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221423 | orchestrator | "", 2026-01-09 00:19:54.221443 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-09 00:19:54.221462 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-09 00:19:54.221475 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221487 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-09 00:19:54.221500 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221512 | orchestrator | "", 2026-01-09 00:19:54.221524 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-09 00:19:54.221537 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-09 00:19:54.221550 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221572 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-09 00:19:54.221590 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221603 | orchestrator | "", 2026-01-09 00:19:54.221616 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-09 00:19:54.221629 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-09 00:19:54.221641 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221654 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-09 00:19:54.221664 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221675 | orchestrator | "", 2026-01-09 00:19:54.221686 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-09 00:19:54.221696 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-09 00:19:54.221707 | orchestrator | " Enabled: true", 
2026-01-09 00:19:54.221718 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-09 00:19:54.221728 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221739 | orchestrator | "", 2026-01-09 00:19:54.221750 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-09 00:19:54.221761 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221772 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221782 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221793 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221804 | orchestrator | "", 2026-01-09 00:19:54.221814 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-09 00:19:54.221825 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221836 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221866 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221878 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221889 | orchestrator | "", 2026-01-09 00:19:54.221899 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-09 00:19:54.221910 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221921 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.221931 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221942 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.221953 | orchestrator | "", 2026-01-09 00:19:54.221963 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-09 00:19:54.221985 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.221996 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.222007 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.222067 | 
orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.222079 | orchestrator | "", 2026-01-09 00:19:54.222090 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-09 00:19:54.222118 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.222129 | orchestrator | " Enabled: true", 2026-01-09 00:19:54.222140 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-09 00:19:54.222151 | orchestrator | " Status: ✅ MATCH", 2026-01-09 00:19:54.222162 | orchestrator | "", 2026-01-09 00:19:54.222173 | orchestrator | "=== Summary ===", 2026-01-09 00:19:54.222183 | orchestrator | "Errors (version mismatches): 0", 2026-01-09 00:19:54.222194 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-09 00:19:54.222204 | orchestrator | "", 2026-01-09 00:19:54.222215 | orchestrator | "✅ All running containers match expected versions!" 2026-01-09 00:19:54.222226 | orchestrator | ] 2026-01-09 00:19:54.222237 | orchestrator | } 2026-01-09 00:19:54.222248 | orchestrator | 2026-01-09 00:19:54.222259 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-09 00:19:54.287509 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:19:54.287602 | orchestrator | 2026-01-09 00:19:54.287620 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:19:54.287634 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-09 00:19:54.287645 | orchestrator | 2026-01-09 00:19:54.428541 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-09 00:19:54.428629 | orchestrator | + deactivate 2026-01-09 00:19:54.428646 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-09 00:19:54.428660 | orchestrator | + 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-09 00:19:54.428672 | orchestrator | + export PATH 2026-01-09 00:19:54.428684 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-09 00:19:54.428696 | orchestrator | + '[' -n '' ']' 2026-01-09 00:19:54.428708 | orchestrator | + hash -r 2026-01-09 00:19:54.428719 | orchestrator | + '[' -n '' ']' 2026-01-09 00:19:54.428730 | orchestrator | + unset VIRTUAL_ENV 2026-01-09 00:19:54.428742 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-09 00:19:54.428753 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-09 00:19:54.428765 | orchestrator | + unset -f deactivate 2026-01-09 00:19:54.428776 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-09 00:19:54.438272 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-09 00:19:54.438323 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-09 00:19:54.438338 | orchestrator | + local max_attempts=60 2026-01-09 00:19:54.438352 | orchestrator | + local name=ceph-ansible 2026-01-09 00:19:54.438364 | orchestrator | + local attempt_num=1 2026-01-09 00:19:54.439238 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-09 00:19:54.478455 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-09 00:19:54.478520 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-09 00:19:54.478533 | orchestrator | + local max_attempts=60 2026-01-09 00:19:54.478544 | orchestrator | + local name=kolla-ansible 2026-01-09 00:19:54.478554 | orchestrator | + local attempt_num=1 2026-01-09 00:19:54.478952 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-09 00:19:54.516133 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-09 00:19:54.516214 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-09 00:19:54.516229 | 
orchestrator | + local max_attempts=60 2026-01-09 00:19:54.516241 | orchestrator | + local name=osism-ansible 2026-01-09 00:19:54.516253 | orchestrator | + local attempt_num=1 2026-01-09 00:19:54.516895 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-09 00:19:54.558271 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-09 00:19:54.558358 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-09 00:19:54.558380 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-09 00:19:55.250226 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-09 00:19:55.426430 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-09 00:19:55.426496 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-09 00:19:55.426507 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-09 00:19:55.426515 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-09 00:19:55.426525 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-09 00:19:55.426533 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-09 00:19:55.426541 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-09 00:19:55.426549 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute 
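The `set -x` trace above reveals the shape of the `wait_for_container_healthy` helper: it takes a maximum attempt count and a container name, and polls `docker inspect` for a `healthy` health status. A sketch consistent with that trace (the poll interval and error message are assumptions, and the status query is factored into its own function so it can be stubbed):

```shell
#!/usr/bin/env bash
# Query a container's health status; the trace shows exactly this inspect call.
container_health_status() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll until the named container reports "healthy", giving up after
# max_attempts tries. Locals match those visible in the trace above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(container_health_status "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # assumed poll interval; not visible in the trace
    done
}
```

In the log the containers are already healthy, so each call (e.g. `wait_for_container_healthy 60 ceph-ansible`) succeeds on the first probe.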
(healthy) 2026-01-09 00:19:55.426571 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-09 00:19:55.426579 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-09 00:19:55.426587 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-09 00:19:55.426594 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-09 00:19:55.426601 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-09 00:19:55.426609 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-09 00:19:55.426616 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-09 00:19:55.426624 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-09 00:19:55.435705 | orchestrator | ++ semver latest 7.0.0 2026-01-09 00:19:55.475407 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 00:19:55.475479 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 00:19:55.475493 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-09 00:19:55.480557 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-09 00:20:07.887399 | orchestrator | 2026-01-09 00:20:07 | INFO  | Task 
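The trace above shows a version gate: `semver latest 7.0.0` returns `-1`, the `-ge 0` test fails, and a separate `== latest` string check lets the rolling tag through before the `ansible.cfg` callback is swapped. A sketch that folds both checks into one helper using GNU `sort -V` (the helper name is an assumption; the real script uses a separate `semver` command as traced):

```shell
#!/usr/bin/env bash
# True if $1 is at least version $2, treating the rolling "latest" tag as
# always new enough -- mirroring the two checks visible in the trace.
version_at_least() {
    local have="$1" want="$2"
    [[ "$have" == "latest" ]] && return 0
    # sort -V orders versions; if $want sorts first, $have >= $want.
    [[ "$(printf '%s\n%s\n' "$want" "$have" | sort -V | head -n1)" == "$want" ]]
}
```

Under this gate, `version_at_least latest 7.0.0` succeeds via the string check, matching the branch taken in the log before the `sed -i` on `/opt/configuration/environments/ansible.cfg`.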
c6d1dd72-6014-4eeb-86b8-605108b21602 (resolvconf) was prepared for execution. 2026-01-09 00:20:07.887548 | orchestrator | 2026-01-09 00:20:07 | INFO  | It takes a moment until task c6d1dd72-6014-4eeb-86b8-605108b21602 (resolvconf) has been started and output is visible here. 2026-01-09 00:20:22.415401 | orchestrator | 2026-01-09 00:20:22.415531 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-09 00:20:22.415549 | orchestrator | 2026-01-09 00:20:22.415561 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:20:22.415573 | orchestrator | Friday 09 January 2026 00:20:12 +0000 (0:00:00.154) 0:00:00.154 ******** 2026-01-09 00:20:22.415584 | orchestrator | ok: [testbed-manager] 2026-01-09 00:20:22.415597 | orchestrator | 2026-01-09 00:20:22.415608 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-09 00:20:22.415620 | orchestrator | Friday 09 January 2026 00:20:16 +0000 (0:00:03.969) 0:00:04.124 ******** 2026-01-09 00:20:22.415631 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:20:22.415643 | orchestrator | 2026-01-09 00:20:22.415654 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-09 00:20:22.415665 | orchestrator | Friday 09 January 2026 00:20:16 +0000 (0:00:00.071) 0:00:04.195 ******** 2026-01-09 00:20:22.415676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-09 00:20:22.415689 | orchestrator | 2026-01-09 00:20:22.415700 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-09 00:20:22.415711 | orchestrator | Friday 09 January 2026 00:20:16 +0000 (0:00:00.106) 0:00:04.302 ******** 2026-01-09 00:20:22.415722 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-09 00:20:22.415733 | orchestrator | 2026-01-09 00:20:22.415744 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-09 00:20:22.415767 | orchestrator | Friday 09 January 2026 00:20:16 +0000 (0:00:00.077) 0:00:04.379 ******** 2026-01-09 00:20:22.415779 | orchestrator | ok: [testbed-manager] 2026-01-09 00:20:22.415790 | orchestrator | 2026-01-09 00:20:22.415801 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-09 00:20:22.415812 | orchestrator | Friday 09 January 2026 00:20:17 +0000 (0:00:01.187) 0:00:05.567 ******** 2026-01-09 00:20:22.415822 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:20:22.415864 | orchestrator | 2026-01-09 00:20:22.415876 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-09 00:20:22.415887 | orchestrator | Friday 09 January 2026 00:20:17 +0000 (0:00:00.065) 0:00:05.632 ******** 2026-01-09 00:20:22.415898 | orchestrator | ok: [testbed-manager] 2026-01-09 00:20:22.415910 | orchestrator | 2026-01-09 00:20:22.415923 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-09 00:20:22.415935 | orchestrator | Friday 09 January 2026 00:20:18 +0000 (0:00:00.503) 0:00:06.135 ******** 2026-01-09 00:20:22.415948 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:20:22.415960 | orchestrator | 2026-01-09 00:20:22.415973 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-09 00:20:22.415986 | orchestrator | Friday 09 January 2026 00:20:18 +0000 (0:00:00.085) 0:00:06.221 ******** 2026-01-09 00:20:22.415999 | orchestrator | changed: [testbed-manager] 2026-01-09 00:20:22.416011 | orchestrator | 2026-01-09 
00:20:22.416023 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-09 00:20:22.416036 | orchestrator | Friday 09 January 2026 00:20:18 +0000 (0:00:00.572) 0:00:06.793 ******** 2026-01-09 00:20:22.416047 | orchestrator | changed: [testbed-manager] 2026-01-09 00:20:22.416057 | orchestrator | 2026-01-09 00:20:22.416068 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-09 00:20:22.416079 | orchestrator | Friday 09 January 2026 00:20:19 +0000 (0:00:01.111) 0:00:07.905 ******** 2026-01-09 00:20:22.416110 | orchestrator | ok: [testbed-manager] 2026-01-09 00:20:22.416122 | orchestrator | 2026-01-09 00:20:22.416141 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-09 00:20:22.416152 | orchestrator | Friday 09 January 2026 00:20:20 +0000 (0:00:01.047) 0:00:08.953 ******** 2026-01-09 00:20:22.416163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-09 00:20:22.416174 | orchestrator | 2026-01-09 00:20:22.416185 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-09 00:20:22.416196 | orchestrator | Friday 09 January 2026 00:20:20 +0000 (0:00:00.092) 0:00:09.045 ******** 2026-01-09 00:20:22.416208 | orchestrator | changed: [testbed-manager] 2026-01-09 00:20:22.416218 | orchestrator | 2026-01-09 00:20:22.416229 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:20:22.416241 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 00:20:22.416252 | orchestrator | 2026-01-09 00:20:22.416263 | orchestrator | 2026-01-09 00:20:22.416274 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-09 00:20:22.416284 | orchestrator | Friday 09 January 2026 00:20:22 +0000 (0:00:01.189) 0:00:10.235 ******** 2026-01-09 00:20:22.416295 | orchestrator | =============================================================================== 2026-01-09 00:20:22.416306 | orchestrator | Gathering Facts --------------------------------------------------------- 3.97s 2026-01-09 00:20:22.416317 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-01-09 00:20:22.416327 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-01-09 00:20:22.416338 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-01-09 00:20:22.416349 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.05s 2026-01-09 00:20:22.416359 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-01-09 00:20:22.416387 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-01-09 00:20:22.416399 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2026-01-09 00:20:22.416410 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-09 00:20:22.416420 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-01-09 00:20:22.416431 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-01-09 00:20:22.416441 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-09 00:20:22.416452 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-09 00:20:22.721483 | 
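The resolvconf play above links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf` and restarts `systemd-resolved`. The filesystem effect can be sketched safely against a scratch root (the service restart is omitted here, and the stub contents are illustrative, not taken from the host):

```shell
#!/usr/bin/env bash
# Replicate the symlink step from the play in a temp directory so the sketch
# can run anywhere without touching the real /etc/resolv.conf.
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"

# Illustrative stub content; systemd-resolved writes the real file.
echo "nameserver 127.0.0.53" > "$root/run/systemd/resolve/stub-resolv.conf"

# The task "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf":
ln -sf "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"

readlink "$root/etc/resolv.conf"
```

On the real host the role then runs the equivalent of `systemctl restart systemd-resolved`, which is the `changed` restart task in the recap above.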
orchestrator | + osism apply sshconfig 2026-01-09 00:20:34.876214 | orchestrator | 2026-01-09 00:20:34 | INFO  | Task 07feab90-740e-4b28-af84-c58daf433743 (sshconfig) was prepared for execution. 2026-01-09 00:20:34.876363 | orchestrator | 2026-01-09 00:20:34 | INFO  | It takes a moment until task 07feab90-740e-4b28-af84-c58daf433743 (sshconfig) has been started and output is visible here. 2026-01-09 00:20:47.061398 | orchestrator | 2026-01-09 00:20:47.061507 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-09 00:20:47.061520 | orchestrator | 2026-01-09 00:20:47.061529 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-09 00:20:47.061538 | orchestrator | Friday 09 January 2026 00:20:39 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-01-09 00:20:47.061547 | orchestrator | ok: [testbed-manager] 2026-01-09 00:20:47.061556 | orchestrator | 2026-01-09 00:20:47.061564 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-09 00:20:47.061572 | orchestrator | Friday 09 January 2026 00:20:39 +0000 (0:00:00.575) 0:00:00.742 ******** 2026-01-09 00:20:47.061603 | orchestrator | changed: [testbed-manager] 2026-01-09 00:20:47.061613 | orchestrator | 2026-01-09 00:20:47.061621 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-09 00:20:47.061629 | orchestrator | Friday 09 January 2026 00:20:40 +0000 (0:00:00.534) 0:00:01.276 ******** 2026-01-09 00:20:47.061637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-09 00:20:47.061645 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-09 00:20:47.061653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-09 00:20:47.061661 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-09 00:20:47.061669 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-09 00:20:47.061677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-09 00:20:47.061685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-09 00:20:47.061693 | orchestrator | 2026-01-09 00:20:47.061701 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-09 00:20:47.061709 | orchestrator | Friday 09 January 2026 00:20:46 +0000 (0:00:05.943) 0:00:07.220 ******** 2026-01-09 00:20:47.061717 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:20:47.061725 | orchestrator | 2026-01-09 00:20:47.061733 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-09 00:20:47.061740 | orchestrator | Friday 09 January 2026 00:20:46 +0000 (0:00:00.075) 0:00:07.295 ******** 2026-01-09 00:20:47.061749 | orchestrator | changed: [testbed-manager] 2026-01-09 00:20:47.061757 | orchestrator | 2026-01-09 00:20:47.061764 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:20:47.061774 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-09 00:20:47.061783 | orchestrator | 2026-01-09 00:20:47.061791 | orchestrator | 2026-01-09 00:20:47.061799 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:20:47.061807 | orchestrator | Friday 09 January 2026 00:20:46 +0000 (0:00:00.620) 0:00:07.916 ******** 2026-01-09 00:20:47.061815 | orchestrator | =============================================================================== 2026-01-09 00:20:47.061875 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.94s 2026-01-09 00:20:47.061884 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2026-01-09 00:20:47.061892 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2026-01-09 00:20:47.061900 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2026-01-09 00:20:47.061908 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-09 00:20:47.384413 | orchestrator | + osism apply known-hosts 2026-01-09 00:20:59.585725 | orchestrator | 2026-01-09 00:20:59 | INFO  | Task 73f61260-ff56-4887-aeec-3931dac32fef (known-hosts) was prepared for execution. 2026-01-09 00:20:59.585905 | orchestrator | 2026-01-09 00:20:59 | INFO  | It takes a moment until task 73f61260-ff56-4887-aeec-3931dac32fef (known-hosts) has been started and output is visible here. 2026-01-09 00:21:17.025461 | orchestrator | 2026-01-09 00:21:17.025586 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-09 00:21:17.025604 | orchestrator | 2026-01-09 00:21:17.025616 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-09 00:21:17.025630 | orchestrator | Friday 09 January 2026 00:21:03 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-01-09 00:21:17.025642 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-09 00:21:17.025653 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-09 00:21:17.025665 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-09 00:21:17.025676 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-09 00:21:17.025713 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-09 00:21:17.025724 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-09 00:21:17.025735 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-09 00:21:17.025746 | orchestrator | 2026-01-09 00:21:17.025757 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-09 00:21:17.025769 | orchestrator | Friday 09 January 2026 00:21:10 +0000 (0:00:06.212) 0:00:06.379 ******** 2026-01-09 00:21:17.025783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-09 00:21:17.025796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-09 00:21:17.025867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-09 00:21:17.025881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-09 00:21:17.025892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-09 00:21:17.025903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-09 00:21:17.025913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-09 00:21:17.025924 | orchestrator | 2026-01-09 00:21:17.025935 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.025946 | orchestrator | Friday 09 January 2026 00:21:10 +0000 (0:00:00.180) 
0:00:06.560 ******** 2026-01-09 00:21:17.025959 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGX2/TLopm23zE6iL+JW0MPQVMhTLM8uWbJ9nbcQ+fH) 2026-01-09 00:21:17.025975 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoPd3nPHsfW5QqJBKXdNd0pLaUd7pLSoZSi2OBcQNXvMZx3IZ2T662+YfIdmYgbm4w7jXYh7MLDg2gPvd7DqhBu74aK3lDhxedPr8apUvVjsfFAjS/r9swZGRkQiiJJFa4krZ5a8D6ffsmuJvUz7ioZ92XUbhbB/gd3a5iX3ivOpreX5fmRWAJm7kzYX3wqiimyNZhm9qtYV3YJblZnvLO/8EmlqNDUxLV09yc/Ky61DxQQrx9AtzQE1YT3NCow6HW7TfMvnuewGminqcx650yYAXNyuDUm8p485n/IixxOn5jjPp4IimOqPIUj9e0PTNBDLOoOUZP1qwuSWO1JvmYUvAuN6877Y3b1A+sHpTWAChYi40YLl/bA9kw83aVRm6vU9537R2o1Mjep4j5wweyUJhPCeJtXfkA6p72v9R320mULqMXaYzHW4OKvPj+XHd3cLqwuZwS77Pkw8n1QlgaPF79IYW4MdhR427ecqlP+dsd8pffy0RRTe+Iw/7kb7k=) 2026-01-09 00:21:17.025997 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPTFjYwTOlfY5/Uh6R47fSTHbKC3DNgvGLXF6OouR0SPHxOyvsX1/D4DyfLjq53XC2L4DeqsPaIkTOTCKeF4nqw=) 2026-01-09 00:21:17.026105 | orchestrator | 2026-01-09 00:21:17.026121 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.026134 | orchestrator | Friday 09 January 2026 00:21:11 +0000 (0:00:01.240) 0:00:07.800 ******** 2026-01-09 00:21:17.026147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKhzRSY2poTf/LpPspeylzKrwnJHNBovCPoC11NkFohuumVpFLam0zbIlwOWxstinIZksMYHrU/6/NHFjEHgCw=) 2026-01-09 00:21:17.026191 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwQcRfl5eHnVal8tWXshJ7EzDSSQwVzu5PGRTrQrhKVF+wZD/914aKly496JqKoAQPifE2dhOK6JDzqE6Kp2iMVY/V4kSRqtrnpwQxAAbQGMDkiZP1NCuLh5nN/yGwLL/8Oj/k6cekRUe5NR3p5Rk2TrUQ8VngIJ3Eaf96XAhxiRCcWa2Jd4gN6WJPxeXyM9XA5CL62LKOVBxCyrWD7wTDlNUBbtkovqIpLnNvvwQal27R10GphsaC+yddt5R81LvJZNI7E1HwD15eH8hCDsM50XxdZOhqUcSZIzUgtVKPFkFsvE55xl/u0lEHYIX5bClSFjqk5vENAm7lgyBR/agn3x191+moum/DiIiZh72852j4IZtBJzHItTbZwbQ4iME4LJk0twtm7nAYgl+3UZ2fGFU/jptEQ3uB2ovuTnsIKBdwl8EZ7Bbaou97RZTWlnV6BOlqNGlWdsGcEkHYXlXF0zQ8IET7QKKDQPG/4Vc54wFPhswplmizY4LNympvHAU=) 2026-01-09 00:21:17.026217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIJmHgvOqNtJROMMJn+JHm1cuofqI9Xx0BiLbwWcHihG) 2026-01-09 00:21:17.026230 | orchestrator | 2026-01-09 00:21:17.026243 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.026256 | orchestrator | Friday 09 January 2026 00:21:12 +0000 (0:00:01.104) 0:00:08.905 ******** 2026-01-09 00:21:17.026269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE50LOJCDnw+maG+teJ83yewhyL0YF0Yy87foTj0AGgLB7LXdPtifaRbNNjdmnyGPIw3deZVvl5wiNObC3bWjkE=) 2026-01-09 00:21:17.026281 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOM0suRm1Ub7rpYT4QB4cSrDa8klfrRXVzd+dHiyaiu4) 2026-01-09 00:21:17.026295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDaCysRO6kVaXH1GFlrLUB7+IQgoQZ3RRhzltY11ZZx8gCHrIJTcb6xlKCmNc8srm8UcNyUyuEpt97C5nP1GWEJG/o0lYakSxAXk+0TupaKfhev38sosnr7RKHi62kRjXOGJVb5n/Jg1buFzfPDDJt1OwL4TW1A6EcqjI/33js5SvLMhjsKsiJqkUzYm1ftJvQvdEUOdZZgGJ3iwEf+83ccDQfEvxXLi6S3DwBIc9w1WjXXbt64W4GV06Z5I0YAbRSIzv5u64rc5JxpvelFjuyaLNLjD2G2gCDDgxKI2Bo/htqS+0JOMZZLtvtIMOXfWWqzDHpNZXsvPT5U/IkNp7RQidInZIn96xNVzjXsI390P8YgkutSI8hbvr6o8SnfZ2n3JCQPmzIHJacu//xfQi9nDE56UvlSjp1VbxkPYu8pQOCGkLKl2cP3uyVi4sV9aiwLxF/0zpeQlDnov88nx0UZdslTZ3dnTbuNztsjmgezZz7L9UyLgPiKAuZaHYvEkW8=) 2026-01-09 00:21:17.026308 | orchestrator | 2026-01-09 00:21:17.026320 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.026331 | orchestrator | Friday 09 January 2026 00:21:13 +0000 (0:00:01.107) 0:00:10.012 ******** 2026-01-09 00:21:17.026415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+6ANEAiQD+79aXHcotHGOHYecYjUm6kcCCoTn4C9I/sXNQ4m4am8ys8YBDefR2Qpaqf5fNhUoM9RKkEuwl3ne2SDsigsZeNJxDlwAiq0A30PgU0n7eG5hhTd77Y2uY/0UE062FoKHqC2v69VotGJ5qxAHMMgdTUu26Ah68KA6Zf0gm3fePTagA8EBbXNvsFUf+dQ8f9goB4KLv2Vwdn+yR8ztGHyt3FpR8doouqqUCkv166B9ZE/vQv2+Py0y5eqBHl529iNncHtBb7rALcxjhIqOoksfOyL/aIP+glW3eVzgB5whL5LfiXZJXU1UDOv5ilmsDAnwxtPZmFjhQztkQ28yNHcFoWxdQ7s5G+Xe3aT1jnIBlbyQGxWc/oGfhb7V7qvqZn3u7WF3AMrF8nyotyZza87myDGEo1eRBTQrhTNKjAjl+aoXoGt3Y3SK2VKJDNv+7YP+49YDa9qSoGUp2Iz9vaFEuL9tRE2fhZPJoLYoLdPg8/0F8/ZofrsDVAk=) 2026-01-09 00:21:17.026428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBALQZzLQOl/7sA6MmJv6nC6bpIwJncdYcQw48fTSz9/ChIWiufPHLCD4jXQE1pHOXkA/G0gaMfndCFqLjbuUc/w=) 2026-01-09 00:21:17.026438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFRgCUeVZj7awsNpOtFrAickyvLttb2stJVxOy8m+Ath) 2026-01-09 00:21:17.026449 | orchestrator | 2026-01-09 00:21:17.026460 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.026471 | orchestrator | Friday 09 January 2026 00:21:14 +0000 (0:00:01.094) 0:00:11.107 ******** 2026-01-09 00:21:17.026482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzh55E8Gyp9wPzxEL7FtFY9UU+OrhxlF5z7ea8M14J+79bW8t2wz5c2TLnfX+pV2E9Gx//DHFa41cu4UfemYFQ=) 2026-01-09 00:21:17.026494 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDQGLu7EiKgtXn3NdZEL/sZwiuNo3iGkiHkvph3dIGqsxbctSd1rGgw20RRzt91CZ0F9nE/gbTD7a03o9hzl6A7q0VQVIBMTU00LaDAAJKqHhTblHyl58orC1zg/WdbXTKp9xPbMWMamHpiKoVXoljhgiOsuibE+Ln/kTiiCFkKCmXmL1IjNhspg98XWbWxv4xv262T1nghzu1y5c9SW6WVBCKqATm/F1BggN9hZU7AnRzuLaOrpNmNsaUwqKoo9dYc00lYkK58XhO1bKN5g0fH8rX+cNUHX0dVwVpaIqaBtAXTS/Ul8jXtSF29UrwJkhBF9t+pG4hyGrd3Tvs/Zn9a9lyIgHtdQu5gJHW9AMOiFhi7BnKHABAP2fiPL/gklE/l6gX4D8NBYO3ouL2VU1I8VqgogPBE2JZIDZsnKaH8a244GpshFnmXXIa+gcr1mj5X0ZFfm+anQCfVUj+U0iaSkjJhM3ReeT31u4Yi6PTWadRYbtp56anxkODdSSBPzM=) 2026-01-09 00:21:17.026512 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILZBZyN4MsHpHl0PANHqOCa4/lXPEbWva6nd++tWspoU) 2026-01-09 00:21:17.026523 | orchestrator | 2026-01-09 00:21:17.026534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:17.026544 | orchestrator | Friday 09 January 2026 00:21:15 +0000 (0:00:01.130) 0:00:12.237 ******** 2026-01-09 00:21:17.026565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDmFrFDWBMqjWcfXeYav9ywmRp+R0T2LORbRpCrrFd86PeGqO66QosBXtVbJbl0en341HEld4myZphNpxeF62Omh+dpjsFr4XWWP41+bebqzLWEENmL90BxF1600xabMXaTtLJ+7eLjsDy48i0Npo927T7CC7c3j4GrweSOMsf6a6mM9DoZSySQbSC3rvIrWDMzHRE3UWB1lksqfyIlpGokkHgMG6mLySCjLMSsd/BQY/xIO3/f+DGaRXCeR/TNBqcdmGbAD81GIm26nXp9WwFV1vTPuRP8pdSnMI3WWXAeUb1ohQpd4NCXeVvLcJ3FRPqCClh+TGQvxAmvVZzke+1b/qgB0NBWfgVpzuE/92r0Maghm7mearXpGNEBdGcxY1MrxqihN/PfLJVnvsIwJyun/xyR9q9C+PP/HuoDAPcCa3ro4yH/AQ4RzyPFPELce+dU9ACaIcHqMkcqqsBednAcUiBttidVrd4pmb21nQB/gOrEP7yxjbjJOxxu6aORwJk=) 2026-01-09 00:21:28.060315 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJMoXrsxaLRFX+whfZzb8B/Mn3hibqXI/POwHBRZSyHHAT7UqWUlX9Oo9AYvjwJIkViU/vfuxQyNOtCp2MngWU=) 2026-01-09 00:21:28.060411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFljzfzGX9cNFMDCFCf8DASw7Xrcc+kw0nleQGbgPxNJ) 2026-01-09 00:21:28.060422 | orchestrator | 2026-01-09 00:21:28.060431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:28.060439 | orchestrator | Friday 09 January 2026 00:21:17 +0000 (0:00:01.110) 0:00:13.348 ******** 2026-01-09 00:21:28.060445 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLr4ratV3YQAYIFWnk8EPBj4z73piCXHbvzwZKg1Txkw3EZ5RufXcpAeszDCq0eII0zCwCtjpT+tErsru9XJa0E=) 2026-01-09 00:21:28.060454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDE+HHUGPQWU034ITu04j3gQb9ZKS7jLpSkzKtLpdRAu/q06TPseYggT5TAuZqWth9ehtMTec3XBWrLzZK7nUfR2go1NyBzRh6HZVZaTRxG31Szb1l/xT/pGG3sMvyrhgCWPRIdcfuuznj18mJ6zV6fBcURGW6FB+AuM/AIRDcXu/joMC+kM63zFBraifnxBD57QRjxACSy2rQtAkMRoeggvwVjie6WSoxKnMprvQcuvfPUAeUtCrzOsYNVPxpOUIQb4N1Zpah/5CImj9fazVqJaIGsHX+ZHM3iRJz0IVdJncnfhACTBw5SxybFIEKnn7jYlEZEkxWf/f2QB6Zcyzfh2Dexzny0ffXCTwn8mDnmz1+U+yMNSf82DvVQpEyKM7wUg7WB/lgpYv8pRF7YBsyClXBIK4NMdEIF9bRisTpZ5I0g0M8sYp+iPY28QRGPAD6xJ85c1dSS8uFmY2naomLFa5ywiNRx9fd/uKD00CaqEw68S0ACBaD0q33ToqqEHzs=) 2026-01-09 00:21:28.060462 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2FtfDkvNf6E55Qlae6bLo2t0jcP/HA4BoXJImWN/Rf) 2026-01-09 00:21:28.060469 | orchestrator | 2026-01-09 00:21:28.060475 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-09 00:21:28.060483 | orchestrator | Friday 09 January 2026 00:21:18 +0000 (0:00:01.129) 0:00:14.477 ******** 2026-01-09 00:21:28.060490 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-09 00:21:28.060497 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-09 00:21:28.060503 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-09 00:21:28.060510 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-09 00:21:28.060516 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-09 00:21:28.060522 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-09 00:21:28.060528 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-09 00:21:28.060534 | orchestrator | 2026-01-09 00:21:28.060541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-09 00:21:28.060562 | orchestrator | Friday 09 January 2026 00:21:23 +0000 (0:00:05.378) 0:00:19.856 ******** 2026-01-09 00:21:28.060570 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-09 00:21:28.060579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-09 00:21:28.060585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-09 00:21:28.060598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-09 00:21:28.060604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-09 00:21:28.060611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-09 00:21:28.060617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-09 00:21:28.060623 | orchestrator | 2026-01-09 00:21:28.060629 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:28.060635 | orchestrator | Friday 09 January 2026 00:21:23 +0000 (0:00:00.177) 0:00:20.034 ******** 2026-01-09 00:21:28.060642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIPGX2/TLopm23zE6iL+JW0MPQVMhTLM8uWbJ9nbcQ+fH) 2026-01-09 00:21:28.060665 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoPd3nPHsfW5QqJBKXdNd0pLaUd7pLSoZSi2OBcQNXvMZx3IZ2T662+YfIdmYgbm4w7jXYh7MLDg2gPvd7DqhBu74aK3lDhxedPr8apUvVjsfFAjS/r9swZGRkQiiJJFa4krZ5a8D6ffsmuJvUz7ioZ92XUbhbB/gd3a5iX3ivOpreX5fmRWAJm7kzYX3wqiimyNZhm9qtYV3YJblZnvLO/8EmlqNDUxLV09yc/Ky61DxQQrx9AtzQE1YT3NCow6HW7TfMvnuewGminqcx650yYAXNyuDUm8p485n/IixxOn5jjPp4IimOqPIUj9e0PTNBDLOoOUZP1qwuSWO1JvmYUvAuN6877Y3b1A+sHpTWAChYi40YLl/bA9kw83aVRm6vU9537R2o1Mjep4j5wweyUJhPCeJtXfkA6p72v9R320mULqMXaYzHW4OKvPj+XHd3cLqwuZwS77Pkw8n1QlgaPF79IYW4MdhR427ecqlP+dsd8pffy0RRTe+Iw/7kb7k=) 2026-01-09 00:21:28.060672 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPTFjYwTOlfY5/Uh6R47fSTHbKC3DNgvGLXF6OouR0SPHxOyvsX1/D4DyfLjq53XC2L4DeqsPaIkTOTCKeF4nqw=) 2026-01-09 00:21:28.060679 | orchestrator | 2026-01-09 00:21:28.060685 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:28.060691 | orchestrator | Friday 09 January 2026 00:21:24 +0000 (0:00:01.095) 0:00:21.129 ******** 2026-01-09 00:21:28.060698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIJmHgvOqNtJROMMJn+JHm1cuofqI9Xx0BiLbwWcHihG) 2026-01-09 00:21:28.060704 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwQcRfl5eHnVal8tWXshJ7EzDSSQwVzu5PGRTrQrhKVF+wZD/914aKly496JqKoAQPifE2dhOK6JDzqE6Kp2iMVY/V4kSRqtrnpwQxAAbQGMDkiZP1NCuLh5nN/yGwLL/8Oj/k6cekRUe5NR3p5Rk2TrUQ8VngIJ3Eaf96XAhxiRCcWa2Jd4gN6WJPxeXyM9XA5CL62LKOVBxCyrWD7wTDlNUBbtkovqIpLnNvvwQal27R10GphsaC+yddt5R81LvJZNI7E1HwD15eH8hCDsM50XxdZOhqUcSZIzUgtVKPFkFsvE55xl/u0lEHYIX5bClSFjqk5vENAm7lgyBR/agn3x191+moum/DiIiZh72852j4IZtBJzHItTbZwbQ4iME4LJk0twtm7nAYgl+3UZ2fGFU/jptEQ3uB2ovuTnsIKBdwl8EZ7Bbaou97RZTWlnV6BOlqNGlWdsGcEkHYXlXF0zQ8IET7QKKDQPG/4Vc54wFPhswplmizY4LNympvHAU=) 2026-01-09 00:21:28.060715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKhzRSY2poTf/LpPspeylzKrwnJHNBovCPoC11NkFohuumVpFLam0zbIlwOWxstinIZksMYHrU/6/NHFjEHgCw=) 2026-01-09 00:21:28.060721 | orchestrator | 2026-01-09 00:21:28.060727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:28.060734 | orchestrator | Friday 09 January 2026 00:21:25 +0000 (0:00:01.068) 0:00:22.197 ******** 2026-01-09 00:21:28.060741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaCysRO6kVaXH1GFlrLUB7+IQgoQZ3RRhzltY11ZZx8gCHrIJTcb6xlKCmNc8srm8UcNyUyuEpt97C5nP1GWEJG/o0lYakSxAXk+0TupaKfhev38sosnr7RKHi62kRjXOGJVb5n/Jg1buFzfPDDJt1OwL4TW1A6EcqjI/33js5SvLMhjsKsiJqkUzYm1ftJvQvdEUOdZZgGJ3iwEf+83ccDQfEvxXLi6S3DwBIc9w1WjXXbt64W4GV06Z5I0YAbRSIzv5u64rc5JxpvelFjuyaLNLjD2G2gCDDgxKI2Bo/htqS+0JOMZZLtvtIMOXfWWqzDHpNZXsvPT5U/IkNp7RQidInZIn96xNVzjXsI390P8YgkutSI8hbvr6o8SnfZ2n3JCQPmzIHJacu//xfQi9nDE56UvlSjp1VbxkPYu8pQOCGkLKl2cP3uyVi4sV9aiwLxF/0zpeQlDnov88nx0UZdslTZ3dnTbuNztsjmgezZz7L9UyLgPiKAuZaHYvEkW8=) 2026-01-09 00:21:28.060747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE50LOJCDnw+maG+teJ83yewhyL0YF0Yy87foTj0AGgLB7LXdPtifaRbNNjdmnyGPIw3deZVvl5wiNObC3bWjkE=) 
2026-01-09 00:21:28.060754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOM0suRm1Ub7rpYT4QB4cSrDa8klfrRXVzd+dHiyaiu4) 2026-01-09 00:21:28.060760 | orchestrator | 2026-01-09 00:21:28.060766 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:28.060773 | orchestrator | Friday 09 January 2026 00:21:26 +0000 (0:00:01.085) 0:00:23.283 ******** 2026-01-09 00:21:28.060779 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+6ANEAiQD+79aXHcotHGOHYecYjUm6kcCCoTn4C9I/sXNQ4m4am8ys8YBDefR2Qpaqf5fNhUoM9RKkEuwl3ne2SDsigsZeNJxDlwAiq0A30PgU0n7eG5hhTd77Y2uY/0UE062FoKHqC2v69VotGJ5qxAHMMgdTUu26Ah68KA6Zf0gm3fePTagA8EBbXNvsFUf+dQ8f9goB4KLv2Vwdn+yR8ztGHyt3FpR8doouqqUCkv166B9ZE/vQv2+Py0y5eqBHl529iNncHtBb7rALcxjhIqOoksfOyL/aIP+glW3eVzgB5whL5LfiXZJXU1UDOv5ilmsDAnwxtPZmFjhQztkQ28yNHcFoWxdQ7s5G+Xe3aT1jnIBlbyQGxWc/oGfhb7V7qvqZn3u7WF3AMrF8nyotyZza87myDGEo1eRBTQrhTNKjAjl+aoXoGt3Y3SK2VKJDNv+7YP+49YDa9qSoGUp2Iz9vaFEuL9tRE2fhZPJoLYoLdPg8/0F8/ZofrsDVAk=) 2026-01-09 00:21:28.060785 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBALQZzLQOl/7sA6MmJv6nC6bpIwJncdYcQw48fTSz9/ChIWiufPHLCD4jXQE1pHOXkA/G0gaMfndCFqLjbuUc/w=) 2026-01-09 00:21:28.060821 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFRgCUeVZj7awsNpOtFrAickyvLttb2stJVxOy8m+Ath) 2026-01-09 00:21:32.735410 | orchestrator | 2026-01-09 00:21:32.735509 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:32.735520 | orchestrator | Friday 09 January 2026 00:21:28 +0000 (0:00:01.097) 0:00:24.380 ******** 2026-01-09 00:21:32.735529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAILZBZyN4MsHpHl0PANHqOCa4/lXPEbWva6nd++tWspoU) 2026-01-09 00:21:32.735558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDQGLu7EiKgtXn3NdZEL/sZwiuNo3iGkiHkvph3dIGqsxbctSd1rGgw20RRzt91CZ0F9nE/gbTD7a03o9hzl6A7q0VQVIBMTU00LaDAAJKqHhTblHyl58orC1zg/WdbXTKp9xPbMWMamHpiKoVXoljhgiOsuibE+Ln/kTiiCFkKCmXmL1IjNhspg98XWbWxv4xv262T1nghzu1y5c9SW6WVBCKqATm/F1BggN9hZU7AnRzuLaOrpNmNsaUwqKoo9dYc00lYkK58XhO1bKN5g0fH8rX+cNUHX0dVwVpaIqaBtAXTS/Ul8jXtSF29UrwJkhBF9t+pG4hyGrd3Tvs/Zn9a9lyIgHtdQu5gJHW9AMOiFhi7BnKHABAP2fiPL/gklE/l6gX4D8NBYO3ouL2VU1I8VqgogPBE2JZIDZsnKaH8a244GpshFnmXXIa+gcr1mj5X0ZFfm+anQCfVUj+U0iaSkjJhM3ReeT31u4Yi6PTWadRYbtp56anxkODdSSBPzM=) 2026-01-09 00:21:32.735572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzh55E8Gyp9wPzxEL7FtFY9UU+OrhxlF5z7ea8M14J+79bW8t2wz5c2TLnfX+pV2E9Gx//DHFa41cu4UfemYFQ=) 2026-01-09 00:21:32.735603 | orchestrator | 2026-01-09 00:21:32.735614 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:32.735622 | orchestrator | Friday 09 January 2026 00:21:29 +0000 (0:00:01.127) 0:00:25.508 ******** 2026-01-09 00:21:32.735629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmFrFDWBMqjWcfXeYav9ywmRp+R0T2LORbRpCrrFd86PeGqO66QosBXtVbJbl0en341HEld4myZphNpxeF62Omh+dpjsFr4XWWP41+bebqzLWEENmL90BxF1600xabMXaTtLJ+7eLjsDy48i0Npo927T7CC7c3j4GrweSOMsf6a6mM9DoZSySQbSC3rvIrWDMzHRE3UWB1lksqfyIlpGokkHgMG6mLySCjLMSsd/BQY/xIO3/f+DGaRXCeR/TNBqcdmGbAD81GIm26nXp9WwFV1vTPuRP8pdSnMI3WWXAeUb1ohQpd4NCXeVvLcJ3FRPqCClh+TGQvxAmvVZzke+1b/qgB0NBWfgVpzuE/92r0Maghm7mearXpGNEBdGcxY1MrxqihN/PfLJVnvsIwJyun/xyR9q9C+PP/HuoDAPcCa3ro4yH/AQ4RzyPFPELce+dU9ACaIcHqMkcqqsBednAcUiBttidVrd4pmb21nQB/gOrEP7yxjbjJOxxu6aORwJk=) 2026-01-09 00:21:32.735637 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJMoXrsxaLRFX+whfZzb8B/Mn3hibqXI/POwHBRZSyHHAT7UqWUlX9Oo9AYvjwJIkViU/vfuxQyNOtCp2MngWU=) 2026-01-09 00:21:32.735645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFljzfzGX9cNFMDCFCf8DASw7Xrcc+kw0nleQGbgPxNJ) 2026-01-09 00:21:32.735652 | orchestrator | 2026-01-09 00:21:32.735659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-09 00:21:32.735666 | orchestrator | Friday 09 January 2026 00:21:30 +0000 (0:00:01.094) 0:00:26.602 ******** 2026-01-09 00:21:32.735674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLr4ratV3YQAYIFWnk8EPBj4z73piCXHbvzwZKg1Txkw3EZ5RufXcpAeszDCq0eII0zCwCtjpT+tErsru9XJa0E=) 2026-01-09 00:21:32.735681 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE+HHUGPQWU034ITu04j3gQb9ZKS7jLpSkzKtLpdRAu/q06TPseYggT5TAuZqWth9ehtMTec3XBWrLzZK7nUfR2go1NyBzRh6HZVZaTRxG31Szb1l/xT/pGG3sMvyrhgCWPRIdcfuuznj18mJ6zV6fBcURGW6FB+AuM/AIRDcXu/joMC+kM63zFBraifnxBD57QRjxACSy2rQtAkMRoeggvwVjie6WSoxKnMprvQcuvfPUAeUtCrzOsYNVPxpOUIQb4N1Zpah/5CImj9fazVqJaIGsHX+ZHM3iRJz0IVdJncnfhACTBw5SxybFIEKnn7jYlEZEkxWf/f2QB6Zcyzfh2Dexzny0ffXCTwn8mDnmz1+U+yMNSf82DvVQpEyKM7wUg7WB/lgpYv8pRF7YBsyClXBIK4NMdEIF9bRisTpZ5I0g0M8sYp+iPY28QRGPAD6xJ85c1dSS8uFmY2naomLFa5ywiNRx9fd/uKD00CaqEw68S0ACBaD0q33ToqqEHzs=) 2026-01-09 00:21:32.735689 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2FtfDkvNf6E55Qlae6bLo2t0jcP/HA4BoXJImWN/Rf) 2026-01-09 00:21:32.735696 | orchestrator | 2026-01-09 00:21:32.735703 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-09 00:21:32.735711 | orchestrator | Friday 09 January 2026 00:21:31 +0000 (0:00:01.155) 
0:00:27.757 ******** 2026-01-09 00:21:32.735718 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-09 00:21:32.735726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-09 00:21:32.735733 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-09 00:21:32.735741 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-09 00:21:32.735748 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-09 00:21:32.735755 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-09 00:21:32.735762 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-09 00:21:32.735769 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:21:32.735777 | orchestrator | 2026-01-09 00:21:32.735848 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-09 00:21:32.735858 | orchestrator | Friday 09 January 2026 00:21:31 +0000 (0:00:00.159) 0:00:27.917 ******** 2026-01-09 00:21:32.735865 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:21:32.735872 | orchestrator | 2026-01-09 00:21:32.735887 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-09 00:21:32.735894 | orchestrator | Friday 09 January 2026 00:21:31 +0000 (0:00:00.065) 0:00:27.983 ******** 2026-01-09 00:21:32.735901 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:21:32.735909 | orchestrator | 2026-01-09 00:21:32.735916 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-09 00:21:32.735923 | orchestrator | Friday 09 January 2026 00:21:31 +0000 (0:00:00.062) 0:00:28.045 ******** 2026-01-09 00:21:32.735930 | orchestrator | changed: [testbed-manager] 2026-01-09 00:21:32.735937 | orchestrator | 2026-01-09 00:21:32.735945 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-09 00:21:32.735953 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 00:21:32.735962 | orchestrator | 2026-01-09 00:21:32.735969 | orchestrator | 2026-01-09 00:21:32.735977 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:21:32.735984 | orchestrator | Friday 09 January 2026 00:21:32 +0000 (0:00:00.790) 0:00:28.835 ******** 2026-01-09 00:21:32.735991 | orchestrator | =============================================================================== 2026-01-09 00:21:32.735998 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.21s 2026-01-09 00:21:32.736005 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.38s 2026-01-09 00:21:32.736015 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-01-09 00:21:32.736022 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-09 00:21:32.736029 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-09 00:21:32.736036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-09 00:21:32.736043 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-09 00:21:32.736050 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-09 00:21:32.736057 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-09 00:21:32.736065 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-09 00:21:32.736072 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 1.10s 2026-01-09 00:21:32.736079 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-09 00:21:32.736086 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-09 00:21:32.736093 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-09 00:21:32.736101 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-09 00:21:32.736108 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-09 00:21:32.736115 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-01-09 00:21:32.736122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-09 00:21:32.736130 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-09 00:21:32.736137 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-01-09 00:21:33.065411 | orchestrator | + osism apply squid 2026-01-09 00:21:45.221512 | orchestrator | 2026-01-09 00:21:45 | INFO  | Task 4e65386e-b652-4449-a71d-34fc59bccabd (squid) was prepared for execution. 2026-01-09 00:21:45.221649 | orchestrator | 2026-01-09 00:21:45 | INFO  | It takes a moment until task 4e65386e-b652-4449-a71d-34fc59bccabd (squid) has been started and output is visible here. 
2026-01-09 00:23:56.434634 | orchestrator | 2026-01-09 00:23:56.434803 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-09 00:23:56.434852 | orchestrator | 2026-01-09 00:23:56.434867 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-09 00:23:56.434878 | orchestrator | Friday 09 January 2026 00:21:49 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-01-09 00:23:56.434908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-09 00:23:56.434920 | orchestrator | 2026-01-09 00:23:56.434931 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-09 00:23:56.434942 | orchestrator | Friday 09 January 2026 00:21:49 +0000 (0:00:00.084) 0:00:00.251 ******** 2026-01-09 00:23:56.434953 | orchestrator | ok: [testbed-manager] 2026-01-09 00:23:56.434965 | orchestrator | 2026-01-09 00:23:56.434976 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-09 00:23:56.434987 | orchestrator | Friday 09 January 2026 00:21:51 +0000 (0:00:01.554) 0:00:01.806 ******** 2026-01-09 00:23:56.434998 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-09 00:23:56.435008 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-09 00:23:56.435019 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-09 00:23:56.435030 | orchestrator | 2026-01-09 00:23:56.435040 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-09 00:23:56.435051 | orchestrator | Friday 09 January 2026 00:21:52 +0000 (0:00:01.242) 0:00:03.048 ******** 2026-01-09 00:23:56.435062 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-09 00:23:56.435072 | 
orchestrator | 2026-01-09 00:23:56.435083 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-09 00:23:56.435094 | orchestrator | Friday 09 January 2026 00:21:53 +0000 (0:00:01.121) 0:00:04.169 ******** 2026-01-09 00:23:56.435105 | orchestrator | ok: [testbed-manager] 2026-01-09 00:23:56.435115 | orchestrator | 2026-01-09 00:23:56.435126 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-09 00:23:56.435136 | orchestrator | Friday 09 January 2026 00:21:53 +0000 (0:00:00.379) 0:00:04.549 ******** 2026-01-09 00:23:56.435147 | orchestrator | changed: [testbed-manager] 2026-01-09 00:23:56.435160 | orchestrator | 2026-01-09 00:23:56.435172 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-09 00:23:56.435186 | orchestrator | Friday 09 January 2026 00:21:54 +0000 (0:00:01.005) 0:00:05.555 ******** 2026-01-09 00:23:56.435198 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-09 00:23:56.435211 | orchestrator | ok: [testbed-manager] 2026-01-09 00:23:56.435224 | orchestrator | 2026-01-09 00:23:56.435236 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-09 00:23:56.435249 | orchestrator | Friday 09 January 2026 00:22:31 +0000 (0:00:36.179) 0:00:41.734 ******** 2026-01-09 00:23:56.435261 | orchestrator | changed: [testbed-manager] 2026-01-09 00:23:56.435273 | orchestrator | 2026-01-09 00:23:56.435285 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-09 00:23:56.435304 | orchestrator | Friday 09 January 2026 00:22:55 +0000 (0:00:24.234) 0:01:05.969 ******** 2026-01-09 00:23:56.435318 | orchestrator | Pausing for 60 seconds 2026-01-09 00:23:56.435332 | orchestrator | changed: [testbed-manager] 2026-01-09 00:23:56.435344 | orchestrator | 2026-01-09 00:23:56.435357 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-09 00:23:56.435382 | orchestrator | Friday 09 January 2026 00:23:55 +0000 (0:01:00.103) 0:02:06.072 ******** 2026-01-09 00:23:56.435394 | orchestrator | ok: [testbed-manager] 2026-01-09 00:23:56.435406 | orchestrator | 2026-01-09 00:23:56.435418 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-09 00:23:56.435430 | orchestrator | Friday 09 January 2026 00:23:55 +0000 (0:00:00.075) 0:02:06.148 ******** 2026-01-09 00:23:56.435443 | orchestrator | changed: [testbed-manager] 2026-01-09 00:23:56.435455 | orchestrator | 2026-01-09 00:23:56.435468 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:23:56.435490 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:23:56.435504 | orchestrator | 2026-01-09 00:23:56.435516 | orchestrator | 2026-01-09 00:23:56.435530 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-09 00:23:56.435543 | orchestrator | Friday 09 January 2026 00:23:56 +0000 (0:00:00.633) 0:02:06.781 ******** 2026-01-09 00:23:56.435553 | orchestrator | =============================================================================== 2026-01-09 00:23:56.435564 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-01-09 00:23:56.435574 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.18s 2026-01-09 00:23:56.435585 | orchestrator | osism.services.squid : Restart squid service --------------------------- 24.23s 2026-01-09 00:23:56.435595 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.55s 2026-01-09 00:23:56.435606 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s 2026-01-09 00:23:56.435616 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-01-09 00:23:56.435627 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.01s 2026-01-09 00:23:56.435637 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-01-09 00:23:56.435648 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-09 00:23:56.435658 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-09 00:23:56.435669 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-01-09 00:23:56.778304 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 00:23:56.778409 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-09 00:23:56.781874 | orchestrator | + set -e 2026-01-09 00:23:56.781920 | orchestrator | + NAMESPACE=kolla 2026-01-09 
00:23:56.781933 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-09 00:23:56.786448 | orchestrator | ++ semver latest 9.0.0 2026-01-09 00:23:56.845030 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-09 00:23:56.845120 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 00:23:56.845707 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-09 00:24:08.989460 | orchestrator | 2026-01-09 00:24:08 | INFO  | Task f01479d7-32f1-4e15-bfc7-6d439a851d44 (operator) was prepared for execution. 2026-01-09 00:24:08.989581 | orchestrator | 2026-01-09 00:24:08 | INFO  | It takes a moment until task f01479d7-32f1-4e15-bfc7-6d439a851d44 (operator) has been started and output is visible here. 2026-01-09 00:24:25.992845 | orchestrator | 2026-01-09 00:24:25.992983 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-09 00:24:25.993009 | orchestrator | 2026-01-09 00:24:25.993029 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 00:24:25.993048 | orchestrator | Friday 09 January 2026 00:24:13 +0000 (0:00:00.151) 0:00:00.151 ******** 2026-01-09 00:24:25.993068 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:24:25.993090 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:24:25.993109 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:24:25.993125 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:24:25.993136 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:24:25.993147 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:24:25.993158 | orchestrator | 2026-01-09 00:24:25.993173 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-09 00:24:25.993184 | orchestrator | Friday 09 January 2026 00:24:17 +0000 (0:00:03.503) 0:00:03.655 ******** 2026-01-09 00:24:25.993194 | orchestrator | ok: [testbed-node-0] 
2026-01-09 00:24:25.993205 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:24:25.993216 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:24:25.993226 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:24:25.993236 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:24:25.993276 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:24:25.993289 | orchestrator | 2026-01-09 00:24:25.993301 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-09 00:24:25.993314 | orchestrator | 2026-01-09 00:24:25.993326 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-09 00:24:25.993338 | orchestrator | Friday 09 January 2026 00:24:17 +0000 (0:00:00.825) 0:00:04.481 ******** 2026-01-09 00:24:25.993350 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:24:25.993363 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:24:25.993375 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:24:25.993386 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:24:25.993397 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:24:25.993407 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:24:25.993417 | orchestrator | 2026-01-09 00:24:25.993428 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-09 00:24:25.993439 | orchestrator | Friday 09 January 2026 00:24:18 +0000 (0:00:00.203) 0:00:04.684 ******** 2026-01-09 00:24:25.993450 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:24:25.993460 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:24:25.993471 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:24:25.993481 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:24:25.993492 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:24:25.993503 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:24:25.993513 | orchestrator | 2026-01-09 00:24:25.993524 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2026-01-09 00:24:25.993535 | orchestrator | Friday 09 January 2026 00:24:18 +0000 (0:00:00.179) 0:00:04.863 ******** 2026-01-09 00:24:25.993546 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:24:25.993558 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:24:25.993568 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:24:25.993579 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:24:25.993590 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:24:25.993600 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:24:25.993611 | orchestrator | 2026-01-09 00:24:25.993622 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-09 00:24:25.993632 | orchestrator | Friday 09 January 2026 00:24:18 +0000 (0:00:00.634) 0:00:05.498 ******** 2026-01-09 00:24:25.993646 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:24:25.993664 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:24:25.993683 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:24:25.993702 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:24:25.993747 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:24:25.993764 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:24:25.993782 | orchestrator | 2026-01-09 00:24:25.993802 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-09 00:24:25.993820 | orchestrator | Friday 09 January 2026 00:24:19 +0000 (0:00:00.839) 0:00:06.337 ******** 2026-01-09 00:24:25.993839 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-09 00:24:25.993857 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-09 00:24:25.993876 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-09 00:24:25.993894 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-09 00:24:25.993913 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-01-09 
00:24:25.993930 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-09 00:24:25.993949 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-09 00:24:25.993967 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-09 00:24:25.993986 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-09 00:24:25.994003 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-09 00:24:25.994108 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-09 00:24:25.994142 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-09 00:24:25.994154 | orchestrator |
2026-01-09 00:24:25.994166 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-09 00:24:25.994191 | orchestrator | Friday 09 January 2026 00:24:21 +0000 (0:00:01.384) 0:00:07.722 ********
2026-01-09 00:24:25.994202 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:24:25.994213 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:24:25.994224 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:24:25.994235 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:24:25.994245 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:24:25.994256 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:24:25.994267 | orchestrator |
2026-01-09 00:24:25.994278 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-09 00:24:25.994290 | orchestrator | Friday 09 January 2026 00:24:22 +0000 (0:00:01.280) 0:00:09.003 ********
2026-01-09 00:24:25.994301 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-09 00:24:25.994312 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-09 00:24:25.994322 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-09 00:24:25.994333 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994368 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994380 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994391 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994402 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994413 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-09 00:24:25.994423 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994434 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994445 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994455 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994466 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994477 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-09 00:24:25.994488 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994498 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994509 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994520 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994530 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994541 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-09 00:24:25.994552 | orchestrator |
2026-01-09 00:24:25.994563 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-09 00:24:25.994575 | orchestrator | Friday 09 January 2026 00:24:23 +0000 (0:00:01.365) 0:00:10.368 ********
2026-01-09 00:24:25.994585 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:25.994596 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:25.994607 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:25.994618 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:25.994628 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:25.994639 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:25.994649 | orchestrator |
2026-01-09 00:24:25.994666 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-09 00:24:25.994677 | orchestrator | Friday 09 January 2026 00:24:23 +0000 (0:00:00.165) 0:00:10.534 ********
2026-01-09 00:24:25.994687 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:25.994698 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:25.994735 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:25.994747 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:25.994766 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:25.994777 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:25.994787 | orchestrator |
2026-01-09 00:24:25.994798 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-09 00:24:25.994809 | orchestrator | Friday 09 January 2026 00:24:24 +0000 (0:00:00.186) 0:00:10.721 ********
2026-01-09 00:24:25.994819 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:24:25.994830 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:24:25.994841 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:24:25.994851 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:24:25.994862 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:24:25.994872 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:24:25.994883 | orchestrator |
2026-01-09 00:24:25.994894 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-09 00:24:25.994904 | orchestrator | Friday 09 January 2026 00:24:24 +0000 (0:00:00.619) 0:00:11.341 ********
2026-01-09 00:24:25.994915 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:25.994926 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:25.994936 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:25.994947 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:25.994957 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:25.994968 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:25.994978 | orchestrator |
2026-01-09 00:24:25.994989 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-09 00:24:25.995000 | orchestrator | Friday 09 January 2026 00:24:24 +0000 (0:00:00.182) 0:00:11.523 ********
2026-01-09 00:24:25.995011 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-09 00:24:25.995022 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-09 00:24:25.995032 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-09 00:24:25.995043 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:24:25.995053 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:24:25.995064 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-09 00:24:25.995075 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-09 00:24:25.995085 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:24:25.995096 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:24:25.995106 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:24:25.995117 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-09 00:24:25.995127 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:24:25.995138 | orchestrator |
2026-01-09 00:24:25.995149 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-09 00:24:25.995160 | orchestrator | Friday 09 January 2026 00:24:25 +0000 (0:00:00.710) 0:00:12.234 ********
2026-01-09 00:24:25.995170 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:25.995181 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:25.995191 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:25.995202 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:25.995212 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:25.995223 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:25.995234 | orchestrator |
2026-01-09 00:24:25.995244 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-09 00:24:25.995255 | orchestrator | Friday 09 January 2026 00:24:25 +0000 (0:00:00.165) 0:00:12.400 ********
2026-01-09 00:24:25.995266 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:25.995276 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:25.995287 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:25.995297 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:25.995317 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:27.466008 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:27.466188 | orchestrator |
2026-01-09 00:24:27.466207 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-09 00:24:27.466222 | orchestrator | Friday 09 January 2026 00:24:25 +0000 (0:00:00.175) 0:00:12.575 ********
2026-01-09 00:24:27.466259 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:27.466271 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:27.466281 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:27.466292 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:27.466302 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:27.466313 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:27.466323 | orchestrator |
2026-01-09 00:24:27.466334 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-09 00:24:27.466345 | orchestrator | Friday 09 January 2026 00:24:26 +0000 (0:00:00.190) 0:00:12.766 ********
2026-01-09 00:24:27.466355 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:24:27.466366 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:24:27.466376 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:24:27.466387 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:24:27.466398 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:24:27.466408 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:24:27.466418 | orchestrator |
2026-01-09 00:24:27.466429 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-09 00:24:27.466440 | orchestrator | Friday 09 January 2026 00:24:26 +0000 (0:00:00.734) 0:00:13.501 ********
2026-01-09 00:24:27.466450 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:24:27.466461 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:24:27.466471 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:24:27.466482 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:24:27.466492 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:24:27.466502 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:24:27.466513 | orchestrator |
2026-01-09 00:24:27.466523 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:24:27.466537 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466553 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466566 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466578 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466591 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466604 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 00:24:27.466617 | orchestrator |
2026-01-09 00:24:27.466629 | orchestrator |
2026-01-09 00:24:27.466642 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:24:27.466655 | orchestrator | Friday 09 January 2026 00:24:27 +0000 (0:00:00.265) 0:00:13.767 ********
2026-01-09 00:24:27.466668 | orchestrator | ===============================================================================
2026-01-09 00:24:27.466681 | orchestrator | Gathering Facts --------------------------------------------------------- 3.50s
2026-01-09 00:24:27.466693 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.38s
2026-01-09 00:24:27.466728 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s
2026-01-09 00:24:27.466742 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s
2026-01-09 00:24:27.466755 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-01-09 00:24:27.466767 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2026-01-09 00:24:27.466780 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.73s
2026-01-09 00:24:27.466800 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-01-09 00:24:27.466811 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-01-09 00:24:27.466841 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2026-01-09 00:24:27.466852 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2026-01-09 00:24:27.466863 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2026-01-09 00:24:27.466874 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s
2026-01-09 00:24:27.466885 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-09 00:24:27.466895 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-01-09 00:24:27.466906 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-01-09 00:24:27.466917 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-01-09 00:24:27.466927 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-01-09 00:24:27.466938 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-01-09 00:24:27.817868 | orchestrator | + osism apply --environment custom facts
2026-01-09 00:24:29.835683 | orchestrator | 2026-01-09 00:24:29 | INFO  | Trying to run play facts in environment custom
2026-01-09 00:24:39.959275 | orchestrator | 2026-01-09 00:24:39 | INFO  | Task d8b2183d-9479-4a84-a60e-217ad403e5ca (facts) was prepared for execution.
2026-01-09 00:24:39.959397 | orchestrator | 2026-01-09 00:24:39 | INFO  | It takes a moment until task d8b2183d-9479-4a84-a60e-217ad403e5ca (facts) has been started and output is visible here.
2026-01-09 00:25:26.207494 | orchestrator |
2026-01-09 00:25:26.207610 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-09 00:25:26.207626 | orchestrator |
2026-01-09 00:25:26.207637 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-09 00:25:26.207647 | orchestrator | Friday 09 January 2026 00:24:44 +0000 (0:00:00.087) 0:00:00.087 ********
2026-01-09 00:25:26.207658 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:26.207669 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:25:26.207739 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:25:26.207750 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:25:26.207760 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.207769 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.207779 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.207789 | orchestrator |
2026-01-09 00:25:26.207799 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-09 00:25:26.207809 | orchestrator | Friday 09 January 2026 00:24:45 +0000 (0:00:01.428) 0:00:01.515 ********
2026-01-09 00:25:26.207818 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:26.207828 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.207838 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.207847 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:25:26.207857 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:25:26.207866 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:25:26.207876 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.207886 | orchestrator |
2026-01-09 00:25:26.207895 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-09 00:25:26.207905 | orchestrator |
2026-01-09 00:25:26.207915 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-09 00:25:26.207941 | orchestrator | Friday 09 January 2026 00:24:46 +0000 (0:00:01.213) 0:00:02.728 ********
2026-01-09 00:25:26.207952 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.207962 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.207972 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208004 | orchestrator |
2026-01-09 00:25:26.208017 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-09 00:25:26.208029 | orchestrator | Friday 09 January 2026 00:24:47 +0000 (0:00:00.108) 0:00:02.837 ********
2026-01-09 00:25:26.208040 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.208051 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.208062 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208073 | orchestrator |
2026-01-09 00:25:26.208083 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-09 00:25:26.208095 | orchestrator | Friday 09 January 2026 00:24:47 +0000 (0:00:00.187) 0:00:03.024 ********
2026-01-09 00:25:26.208106 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.208117 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.208128 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208139 | orchestrator |
2026-01-09 00:25:26.208150 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-09 00:25:26.208161 | orchestrator | Friday 09 January 2026 00:24:47 +0000 (0:00:00.222) 0:00:03.247 ********
2026-01-09 00:25:26.208173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:25:26.208187 | orchestrator |
2026-01-09 00:25:26.208198 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-09 00:25:26.208209 | orchestrator | Friday 09 January 2026 00:24:47 +0000 (0:00:00.162) 0:00:03.410 ********
2026-01-09 00:25:26.208221 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.208232 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.208243 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208254 | orchestrator |
2026-01-09 00:25:26.208265 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-09 00:25:26.208276 | orchestrator | Friday 09 January 2026 00:24:48 +0000 (0:00:00.458) 0:00:03.868 ********
2026-01-09 00:25:26.208294 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:25:26.208312 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:25:26.208330 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:25:26.208347 | orchestrator |
2026-01-09 00:25:26.208364 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-09 00:25:26.208381 | orchestrator | Friday 09 January 2026 00:24:48 +0000 (0:00:00.146) 0:00:04.014 ********
2026-01-09 00:25:26.208397 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.208414 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.208431 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.208447 | orchestrator |
2026-01-09 00:25:26.208463 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-09 00:25:26.208481 | orchestrator | Friday 09 January 2026 00:24:49 +0000 (0:00:01.099) 0:00:05.114 ********
2026-01-09 00:25:26.208498 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.208516 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.208532 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208550 | orchestrator |
2026-01-09 00:25:26.208567 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-09 00:25:26.208582 | orchestrator | Friday 09 January 2026 00:24:49 +0000 (0:00:00.456) 0:00:05.571 ********
2026-01-09 00:25:26.208592 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.208601 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.208611 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.208621 | orchestrator |
2026-01-09 00:25:26.208630 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-09 00:25:26.208640 | orchestrator | Friday 09 January 2026 00:24:50 +0000 (0:00:01.090) 0:00:06.661 ********
2026-01-09 00:25:26.208651 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.208661 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.208672 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.208712 | orchestrator |
2026-01-09 00:25:26.208731 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-09 00:25:26.208764 | orchestrator | Friday 09 January 2026 00:25:07 +0000 (0:00:16.419) 0:00:23.081 ********
2026-01-09 00:25:26.208776 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:25:26.208786 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:25:26.208797 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:25:26.208807 | orchestrator |
2026-01-09 00:25:26.208818 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-09 00:25:26.208851 | orchestrator | Friday 09 January 2026 00:25:07 +0000 (0:00:00.093) 0:00:23.174 ********
2026-01-09 00:25:26.208863 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:25:26.208873 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:25:26.208884 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:25:26.208895 | orchestrator |
2026-01-09 00:25:26.208905 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-09 00:25:26.208916 | orchestrator | Friday 09 January 2026 00:25:16 +0000 (0:00:08.991) 0:00:32.166 ********
2026-01-09 00:25:26.208927 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.208938 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.208948 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.208959 | orchestrator |
2026-01-09 00:25:26.208970 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-09 00:25:26.208980 | orchestrator | Friday 09 January 2026 00:25:16 +0000 (0:00:00.480) 0:00:32.647 ********
2026-01-09 00:25:26.208991 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-09 00:25:26.209002 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-09 00:25:26.209013 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-09 00:25:26.209024 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-09 00:25:26.209034 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-09 00:25:26.209045 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-09 00:25:26.209056 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-09 00:25:26.209067 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-09 00:25:26.209077 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-09 00:25:26.209088 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-09 00:25:26.209099 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-09 00:25:26.209109 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-09 00:25:26.209120 | orchestrator |
2026-01-09 00:25:26.209131 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-09 00:25:26.209142 | orchestrator | Friday 09 January 2026 00:25:20 +0000 (0:00:03.815) 0:00:36.462 ********
2026-01-09 00:25:26.209152 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.209163 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.209174 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.209184 | orchestrator |
2026-01-09 00:25:26.209195 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-09 00:25:26.209206 | orchestrator |
2026-01-09 00:25:26.209217 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-09 00:25:26.209228 | orchestrator | Friday 09 January 2026 00:25:22 +0000 (0:00:01.561) 0:00:38.023 ********
2026-01-09 00:25:26.209259 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:25:26.209271 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:25:26.209282 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:25:26.209293 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:26.209303 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:26.209314 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:26.209324 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:26.209335 | orchestrator |
2026-01-09 00:25:26.209345 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:25:26.209366 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:25:26.209378 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:25:26.209391 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:25:26.209402 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:25:26.209413 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:25:26.209424 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:25:26.209434 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:25:26.209445 | orchestrator |
2026-01-09 00:25:26.209456 | orchestrator |
2026-01-09 00:25:26.209467 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:25:26.209477 | orchestrator | Friday 09 January 2026 00:25:26 +0000 (0:00:03.983) 0:00:42.007 ********
2026-01-09 00:25:26.209488 | orchestrator | ===============================================================================
2026-01-09 00:25:26.209499 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.42s
2026-01-09 00:25:26.209509 | orchestrator | Install required packages (Debian) -------------------------------------- 8.99s
2026-01-09 00:25:26.209520 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.98s
2026-01-09 00:25:26.209530 | orchestrator | Copy fact files --------------------------------------------------------- 3.82s
2026-01-09 00:25:26.209541 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.56s
2026-01-09 00:25:26.209551 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s
2026-01-09 00:25:26.209611 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-01-09 00:25:26.475899 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.10s
2026-01-09 00:25:26.475983 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-01-09 00:25:26.475991 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-01-09 00:25:26.475997 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-01-09 00:25:26.476002 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-01-09 00:25:26.476007 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-09 00:25:26.476012 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-01-09 00:25:26.476020 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-09 00:25:26.476029 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-09 00:25:26.476036 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-09 00:25:26.476043 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-01-09 00:25:26.807470 | orchestrator | + osism apply bootstrap
2026-01-09 00:25:38.997654 | orchestrator | 2026-01-09 00:25:38 | INFO  | Task 06ffc954-1c2b-4fe2-a330-baec318e497c (bootstrap) was prepared for execution.
2026-01-09 00:25:38.997775 | orchestrator | 2026-01-09 00:25:38 | INFO  | It takes a moment until task 06ffc954-1c2b-4fe2-a330-baec318e497c (bootstrap) has been started and output is visible here.
2026-01-09 00:25:55.472122 | orchestrator |
2026-01-09 00:25:55.472253 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-09 00:25:55.472267 | orchestrator |
2026-01-09 00:25:55.472276 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-09 00:25:55.472284 | orchestrator | Friday 09 January 2026 00:25:43 +0000 (0:00:00.155) 0:00:00.155 ********
2026-01-09 00:25:55.472292 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:55.472302 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:55.472309 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:55.472317 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:55.472325 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:25:55.472333 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:25:55.472340 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:25:55.472348 | orchestrator |
2026-01-09 00:25:55.472356 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-09 00:25:55.472363 | orchestrator |
2026-01-09 00:25:55.472371 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-09 00:25:55.472379 | orchestrator | Friday 09 January 2026 00:25:43 +0000 (0:00:00.250) 0:00:00.406 ********
2026-01-09 00:25:55.472387 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:25:55.472395 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:25:55.472402 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:25:55.472410 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:55.472418 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:55.472426 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:55.472434 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:55.472441 | orchestrator |
2026-01-09 00:25:55.472449 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-09 00:25:55.472457 | orchestrator |
2026-01-09 00:25:55.472465 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-09 00:25:55.472472 | orchestrator | Friday 09 January 2026 00:25:47 +0000 (0:00:03.714) 0:00:04.121 ********
2026-01-09 00:25:55.472481 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-09 00:25:55.472489 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-09 00:25:55.472497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-09 00:25:55.472505 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-09 00:25:55.472512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:25:55.472520 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-09 00:25:55.472528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-09 00:25:55.472536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:25:55.472543 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-09 00:25:55.472551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:25:55.472559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-09 00:25:55.472567 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-09 00:25:55.472574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-09 00:25:55.472582 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-09 00:25:55.472590 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-09 00:25:55.472598 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-09 00:25:55.472606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-09 00:25:55.472613 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-09 00:25:55.472621 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:25:55.472629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-09 00:25:55.472637 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-09 00:25:55.472647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-09 00:25:55.472656 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:25:55.472710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-09 00:25:55.472720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-09 00:25:55.472729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-09 00:25:55.472738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-09 00:25:55.472747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-09 00:25:55.472755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-09 00:25:55.472764 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-09 00:25:55.472773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-09 00:25:55.472782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-09 00:25:55.472791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-09 00:25:55.472800 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-09 00:25:55.472809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-09 00:25:55.472818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-09 00:25:55.472826 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:25:55.472835 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:25:55.472844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-09 00:25:55.472853 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-09 00:25:55.472862 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-09 00:25:55.472871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-09 00:25:55.472880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-09 00:25:55.472901 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-09 00:25:55.472910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-09 00:25:55.472920 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-09 00:25:55.472943 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-09 00:25:55.472953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-09 00:25:55.472962 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:25:55.472970 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-09 00:25:55.472977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-09 00:25:55.472985 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:25:55.472993 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-09 00:25:55.473000 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-09 00:25:55.473008 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-09 00:25:55.473016 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:25:55.473023 | orchestrator |
2026-01-09 00:25:55.473031 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-09 00:25:55.473040 | orchestrator |
2026-01-09 00:25:55.473048 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-09 00:25:55.473055 | orchestrator | Friday 09 January 2026 00:25:47 +0000 (0:00:00.503) 0:00:04.624 ********
2026-01-09 00:25:55.473063 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:25:55.473071 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:55.473079 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:55.473086 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:25:55.473094 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:55.473102 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:25:55.473109 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:55.473117 | orchestrator |
2026-01-09 00:25:55.473125 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-09 00:25:55.473133 | orchestrator | Friday 09 January 2026 00:25:49 +0000 (0:00:01.245) 0:00:05.870 ********
2026-01-09 00:25:55.473141 | orchestrator | ok: [testbed-manager]
2026-01-09 00:25:55.473148 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:25:55.473162 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:25:55.473169 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:25:55.473177 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:25:55.473185 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:25:55.473192 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:25:55.473200 | orchestrator |
2026-01-09 00:25:55.473208 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-09 00:25:55.473216 | orchestrator | Friday 09 January 2026 00:25:50 +0000 (0:00:01.290) 0:00:07.160 ********
2026-01-09 00:25:55.473225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:25:55.473235 | orchestrator |
2026-01-09 00:25:55.473243 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-09 00:25:55.473251 | orchestrator | Friday 09
January 2026 00:25:50 +0000 (0:00:00.308) 0:00:07.468 ******** 2026-01-09 00:25:55.473259 | orchestrator | changed: [testbed-manager] 2026-01-09 00:25:55.473266 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:25:55.473274 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:25:55.473282 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:25:55.473290 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:25:55.473297 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:25:55.473305 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:25:55.473312 | orchestrator | 2026-01-09 00:25:55.473320 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-09 00:25:55.473328 | orchestrator | Friday 09 January 2026 00:25:52 +0000 (0:00:02.090) 0:00:09.559 ******** 2026-01-09 00:25:55.473335 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:25:55.473345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:25:55.473355 | orchestrator | 2026-01-09 00:25:55.473363 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-09 00:25:55.473371 | orchestrator | Friday 09 January 2026 00:25:53 +0000 (0:00:00.295) 0:00:09.854 ******** 2026-01-09 00:25:55.473378 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:25:55.473386 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:25:55.473394 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:25:55.473401 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:25:55.473409 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:25:55.473416 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:25:55.473424 | orchestrator | 2026-01-09 00:25:55.473432 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-09 00:25:55.473440 | orchestrator | Friday 09 January 2026 00:25:54 +0000 (0:00:01.086) 0:00:10.940 ******** 2026-01-09 00:25:55.473447 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:25:55.473455 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:25:55.473463 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:25:55.473470 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:25:55.473478 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:25:55.473486 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:25:55.473493 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:25:55.473501 | orchestrator | 2026-01-09 00:25:55.473509 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-09 00:25:55.473517 | orchestrator | Friday 09 January 2026 00:25:54 +0000 (0:00:00.641) 0:00:11.582 ******** 2026-01-09 00:25:55.473524 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:25:55.473532 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:25:55.473539 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:25:55.473547 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:25:55.473554 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:25:55.473567 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:25:55.473575 | orchestrator | ok: [testbed-manager] 2026-01-09 00:25:55.473583 | orchestrator | 2026-01-09 00:25:55.473591 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-09 00:25:55.473599 | orchestrator | Friday 09 January 2026 00:25:55 +0000 (0:00:00.471) 0:00:12.053 ******** 2026-01-09 00:25:55.473607 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:25:55.473615 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:25:55.473628 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:26:08.307168 | orchestrator | skipping: 
[testbed-node-5] 2026-01-09 00:26:08.307293 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:26:08.307309 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:26:08.307320 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:26:08.307331 | orchestrator | 2026-01-09 00:26:08.307344 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-09 00:26:08.307357 | orchestrator | Friday 09 January 2026 00:25:55 +0000 (0:00:00.233) 0:00:12.286 ******** 2026-01-09 00:26:08.307370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:08.307400 | orchestrator | 2026-01-09 00:26:08.307412 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-09 00:26:08.307424 | orchestrator | Friday 09 January 2026 00:25:55 +0000 (0:00:00.302) 0:00:12.589 ******** 2026-01-09 00:26:08.307435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:08.307446 | orchestrator | 2026-01-09 00:26:08.307457 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-09 00:26:08.307500 | orchestrator | Friday 09 January 2026 00:25:56 +0000 (0:00:00.308) 0:00:12.897 ******** 2026-01-09 00:26:08.307512 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.307524 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.307535 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.307546 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.307556 | orchestrator | ok: [testbed-node-1] 2026-01-09 
00:26:08.307567 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.307577 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.307588 | orchestrator | 2026-01-09 00:26:08.307599 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-09 00:26:08.307613 | orchestrator | Friday 09 January 2026 00:25:57 +0000 (0:00:01.549) 0:00:14.447 ******** 2026-01-09 00:26:08.307633 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:26:08.307650 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:26:08.307695 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:26:08.307719 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:26:08.307740 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:26:08.307761 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:26:08.307775 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:26:08.307786 | orchestrator | 2026-01-09 00:26:08.307797 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-09 00:26:08.307808 | orchestrator | Friday 09 January 2026 00:25:57 +0000 (0:00:00.244) 0:00:14.692 ******** 2026-01-09 00:26:08.307818 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.307829 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.307840 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.307851 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.307861 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.307872 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.307883 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.307893 | orchestrator | 2026-01-09 00:26:08.307904 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-09 00:26:08.307943 | orchestrator | Friday 09 January 2026 00:25:58 +0000 (0:00:00.568) 0:00:15.261 ******** 2026-01-09 00:26:08.307954 | orchestrator | skipping: 
[testbed-manager] 2026-01-09 00:26:08.307965 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:26:08.307975 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:26:08.307986 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:26:08.307997 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:26:08.308007 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:26:08.308017 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:26:08.308033 | orchestrator | 2026-01-09 00:26:08.308050 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-09 00:26:08.308070 | orchestrator | Friday 09 January 2026 00:25:58 +0000 (0:00:00.359) 0:00:15.621 ******** 2026-01-09 00:26:08.308088 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308106 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:26:08.308124 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:26:08.308138 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:08.308156 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:26:08.308173 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:08.308191 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:08.308208 | orchestrator | 2026-01-09 00:26:08.308226 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-09 00:26:08.308244 | orchestrator | Friday 09 January 2026 00:25:59 +0000 (0:00:00.644) 0:00:16.265 ******** 2026-01-09 00:26:08.308262 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308274 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:26:08.308285 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:26:08.308295 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:26:08.308306 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:08.308317 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:08.308328 | orchestrator | changed: 
[testbed-node-2] 2026-01-09 00:26:08.308338 | orchestrator | 2026-01-09 00:26:08.308349 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-09 00:26:08.308360 | orchestrator | Friday 09 January 2026 00:26:00 +0000 (0:00:01.128) 0:00:17.394 ******** 2026-01-09 00:26:08.308370 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308381 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.308392 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.308402 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.308413 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.308424 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.308435 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.308446 | orchestrator | 2026-01-09 00:26:08.308465 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-09 00:26:08.308476 | orchestrator | Friday 09 January 2026 00:26:01 +0000 (0:00:01.044) 0:00:18.439 ******** 2026-01-09 00:26:08.308509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:08.308521 | orchestrator | 2026-01-09 00:26:08.308532 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-09 00:26:08.308542 | orchestrator | Friday 09 January 2026 00:26:02 +0000 (0:00:00.350) 0:00:18.790 ******** 2026-01-09 00:26:08.308553 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:26:08.308563 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:08.308574 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:26:08.308584 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:26:08.308595 | orchestrator | changed: [testbed-node-0] 2026-01-09 
00:26:08.308605 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:08.308616 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:26:08.308626 | orchestrator | 2026-01-09 00:26:08.308637 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-09 00:26:08.308733 | orchestrator | Friday 09 January 2026 00:26:03 +0000 (0:00:01.390) 0:00:20.180 ******** 2026-01-09 00:26:08.308753 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308773 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.308785 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.308795 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.308806 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.308816 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.308827 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.308837 | orchestrator | 2026-01-09 00:26:08.308848 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-09 00:26:08.308859 | orchestrator | Friday 09 January 2026 00:26:03 +0000 (0:00:00.232) 0:00:20.413 ******** 2026-01-09 00:26:08.308869 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308880 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.308890 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.308901 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.308912 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.308922 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.308933 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.308943 | orchestrator | 2026-01-09 00:26:08.308954 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-09 00:26:08.308965 | orchestrator | Friday 09 January 2026 00:26:03 +0000 (0:00:00.241) 0:00:20.654 ******** 2026-01-09 00:26:08.308975 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.308986 | 
orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.308996 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.309007 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.309017 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.309028 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.309038 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.309049 | orchestrator | 2026-01-09 00:26:08.309059 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-09 00:26:08.309070 | orchestrator | Friday 09 January 2026 00:26:04 +0000 (0:00:00.256) 0:00:20.911 ******** 2026-01-09 00:26:08.309082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:08.309094 | orchestrator | 2026-01-09 00:26:08.309105 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-09 00:26:08.309116 | orchestrator | Friday 09 January 2026 00:26:04 +0000 (0:00:00.324) 0:00:21.236 ******** 2026-01-09 00:26:08.309126 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.309137 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.309147 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.309158 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.309168 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.309178 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.309189 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.309199 | orchestrator | 2026-01-09 00:26:08.309210 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-09 00:26:08.309221 | orchestrator | Friday 09 January 2026 00:26:05 +0000 (0:00:00.542) 0:00:21.778 ******** 2026-01-09 00:26:08.309231 | orchestrator | 
skipping: [testbed-manager] 2026-01-09 00:26:08.309242 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:26:08.309253 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:26:08.309263 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:26:08.309274 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:26:08.309284 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:26:08.309295 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:26:08.309305 | orchestrator | 2026-01-09 00:26:08.309316 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-09 00:26:08.309327 | orchestrator | Friday 09 January 2026 00:26:05 +0000 (0:00:00.274) 0:00:22.052 ******** 2026-01-09 00:26:08.309337 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.309356 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.309367 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.309378 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.309388 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:08.309399 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:08.309409 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:08.309420 | orchestrator | 2026-01-09 00:26:08.309430 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-09 00:26:08.309441 | orchestrator | Friday 09 January 2026 00:26:06 +0000 (0:00:01.062) 0:00:23.115 ******** 2026-01-09 00:26:08.309451 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.309462 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.309473 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.309484 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:08.309494 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.309505 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:08.309515 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:08.309526 | orchestrator | 
2026-01-09 00:26:08.309542 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-09 00:26:08.309553 | orchestrator | Friday 09 January 2026 00:26:06 +0000 (0:00:00.567) 0:00:23.683 ******** 2026-01-09 00:26:08.309564 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:08.309575 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:08.309585 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:08.309596 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:08.309616 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:50.612412 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:50.612558 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:50.612577 | orchestrator | 2026-01-09 00:26:50.612590 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-09 00:26:50.612604 | orchestrator | Friday 09 January 2026 00:26:08 +0000 (0:00:01.337) 0:00:25.020 ******** 2026-01-09 00:26:50.612615 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.612684 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.612738 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:50.612751 | orchestrator | changed: [testbed-manager] 2026-01-09 00:26:50.612763 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:50.612774 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:50.612785 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:50.612796 | orchestrator | 2026-01-09 00:26:50.612807 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-09 00:26:50.612818 | orchestrator | Friday 09 January 2026 00:26:24 +0000 (0:00:16.178) 0:00:41.198 ******** 2026-01-09 00:26:50.612829 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:50.612840 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.612850 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.612862 | orchestrator 
| ok: [testbed-node-5] 2026-01-09 00:26:50.612873 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:50.612884 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:50.612895 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:50.612905 | orchestrator | 2026-01-09 00:26:50.612916 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-09 00:26:50.612927 | orchestrator | Friday 09 January 2026 00:26:24 +0000 (0:00:00.238) 0:00:41.437 ******** 2026-01-09 00:26:50.612938 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:50.612949 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.612960 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.612970 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:50.612981 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:50.612991 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:50.613002 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:50.613013 | orchestrator | 2026-01-09 00:26:50.613024 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-09 00:26:50.613034 | orchestrator | Friday 09 January 2026 00:26:24 +0000 (0:00:00.245) 0:00:41.682 ******** 2026-01-09 00:26:50.613074 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:50.613086 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.613096 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.613107 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:50.613117 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:50.613128 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:50.613138 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:50.613149 | orchestrator | 2026-01-09 00:26:50.613159 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-09 00:26:50.613170 | orchestrator | Friday 09 January 2026 00:26:25 +0000 (0:00:00.245) 0:00:41.927 ******** 2026-01-09 
00:26:50.613184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:50.613197 | orchestrator | 2026-01-09 00:26:50.613208 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-09 00:26:50.613218 | orchestrator | Friday 09 January 2026 00:26:25 +0000 (0:00:00.321) 0:00:42.249 ******** 2026-01-09 00:26:50.613229 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:50.613240 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.613250 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:50.613261 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:50.613271 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:50.613282 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.613297 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:50.613316 | orchestrator | 2026-01-09 00:26:50.613334 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-09 00:26:50.613351 | orchestrator | Friday 09 January 2026 00:26:27 +0000 (0:00:01.838) 0:00:44.088 ******** 2026-01-09 00:26:50.613371 | orchestrator | changed: [testbed-manager] 2026-01-09 00:26:50.613388 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:26:50.613406 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:26:50.613417 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:50.613427 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:26:50.613438 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:50.613448 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:50.613458 | orchestrator | 2026-01-09 00:26:50.613469 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-09 00:26:50.613480 | 
orchestrator | Friday 09 January 2026 00:26:28 +0000 (0:00:01.097) 0:00:45.186 ******** 2026-01-09 00:26:50.613490 | orchestrator | ok: [testbed-manager] 2026-01-09 00:26:50.613502 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:26:50.613512 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:26:50.613523 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:26:50.613533 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:26:50.613544 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:26:50.613554 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:26:50.613565 | orchestrator | 2026-01-09 00:26:50.613576 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-09 00:26:50.613587 | orchestrator | Friday 09 January 2026 00:26:29 +0000 (0:00:00.813) 0:00:45.999 ******** 2026-01-09 00:26:50.613598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:26:50.613611 | orchestrator | 2026-01-09 00:26:50.613621 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-09 00:26:50.613633 | orchestrator | Friday 09 January 2026 00:26:29 +0000 (0:00:00.322) 0:00:46.322 ******** 2026-01-09 00:26:50.613666 | orchestrator | changed: [testbed-manager] 2026-01-09 00:26:50.613677 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:26:50.613688 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:26:50.613698 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:26:50.613709 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:26:50.613729 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:26:50.613740 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:26:50.613751 | orchestrator | 2026-01-09 00:26:50.613781 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************
2026-01-09 00:26:50.613792 | orchestrator | Friday 09 January 2026 00:26:30 +0000 (0:00:01.102) 0:00:47.424 ********
2026-01-09 00:26:50.613802 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:26:50.613813 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:26:50.613823 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:26:50.613834 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:26:50.613844 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:26:50.613855 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:26:50.613865 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:26:50.613875 | orchestrator |
2026-01-09 00:26:50.613886 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-09 00:26:50.613897 | orchestrator | Friday 09 January 2026 00:26:30 +0000 (0:00:00.239) 0:00:47.663 ********
2026-01-09 00:26:50.613926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:26:50.613937 | orchestrator |
2026-01-09 00:26:50.613948 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-09 00:26:50.613959 | orchestrator | Friday 09 January 2026 00:26:31 +0000 (0:00:00.319) 0:00:47.983 ********
2026-01-09 00:26:50.613969 | orchestrator | ok: [testbed-manager]
2026-01-09 00:26:50.613980 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:26:50.613990 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:26:50.614001 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:26:50.614011 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:26:50.614139 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:26:50.614152 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:26:50.614162 | orchestrator |
2026-01-09 00:26:50.614173 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-09 00:26:50.614184 | orchestrator | Friday 09 January 2026 00:26:33 +0000 (0:00:01.862) 0:00:49.846 ********
2026-01-09 00:26:50.614195 | orchestrator | changed: [testbed-manager]
2026-01-09 00:26:50.614205 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:26:50.614216 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:26:50.614227 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:26:50.614237 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:26:50.614248 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:26:50.614258 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:26:50.614269 | orchestrator |
2026-01-09 00:26:50.614279 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-09 00:26:50.614290 | orchestrator | Friday 09 January 2026 00:26:34 +0000 (0:00:01.282) 0:00:51.129 ********
2026-01-09 00:26:50.614301 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:26:50.614311 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:26:50.614321 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:26:50.614332 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:26:50.614343 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:26:50.614353 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:26:50.614364 | orchestrator | changed: [testbed-manager]
2026-01-09 00:26:50.614374 | orchestrator |
2026-01-09 00:26:50.614385 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-09 00:26:50.614396 | orchestrator | Friday 09 January 2026 00:26:47 +0000 (0:00:13.106) 0:01:04.235 ********
2026-01-09 00:26:50.614407 | orchestrator | ok: [testbed-manager]
2026-01-09 00:26:50.614417 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:26:50.614428 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:26:50.614438 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:26:50.614449 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:26:50.614459 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:26:50.614480 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:26:50.614491 | orchestrator |
2026-01-09 00:26:50.614501 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-09 00:26:50.614512 | orchestrator | Friday 09 January 2026 00:26:48 +0000 (0:00:01.157) 0:01:05.392 ********
2026-01-09 00:26:50.614523 | orchestrator | ok: [testbed-manager]
2026-01-09 00:26:50.614533 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:26:50.614544 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:26:50.614554 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:26:50.614564 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:26:50.614575 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:26:50.614585 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:26:50.614596 | orchestrator |
2026-01-09 00:26:50.614606 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-09 00:26:50.614624 | orchestrator | Friday 09 January 2026 00:26:49 +0000 (0:00:01.106) 0:01:06.498 ********
2026-01-09 00:26:50.614669 | orchestrator | ok: [testbed-manager]
2026-01-09 00:26:50.614682 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:26:50.614692 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:26:50.614703 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:26:50.614713 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:26:50.614724 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:26:50.614734 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:26:50.614745 | orchestrator |
2026-01-09 00:26:50.614755 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-09 00:26:50.614766 | orchestrator | Friday 09 January 2026 00:26:50 +0000 (0:00:00.261) 0:01:06.760 ********
2026-01-09 00:26:50.614777 | orchestrator | ok: [testbed-manager]
2026-01-09 00:26:50.614787 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:26:50.614798 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:26:50.614808 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:26:50.614819 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:26:50.614829 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:26:50.614839 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:26:50.614850 | orchestrator |
2026-01-09 00:26:50.614860 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-09 00:26:50.614871 | orchestrator | Friday 09 January 2026 00:26:50 +0000 (0:00:00.243) 0:01:07.004 ********
2026-01-09 00:26:50.614889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:26:50.614900 | orchestrator |
2026-01-09 00:26:50.614921 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-09 00:29:08.108856 | orchestrator | Friday 09 January 2026 00:26:50 +0000 (0:00:00.324) 0:01:07.328 ********
2026-01-09 00:29:08.108973 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.108987 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.108995 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109004 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109011 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109020 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109028 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109036 | orchestrator |
2026-01-09 00:29:08.109046 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-09 00:29:08.109056 | orchestrator | Friday 09 January 2026 00:26:52 +0000 (0:00:01.965) 0:01:09.294 ********
2026-01-09 00:29:08.109065 | orchestrator | changed: [testbed-manager]
2026-01-09 00:29:08.109074 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:29:08.109083 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:29:08.109091 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:29:08.109099 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:29:08.109108 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:29:08.109116 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:29:08.109125 | orchestrator |
2026-01-09 00:29:08.109162 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-09 00:29:08.109173 | orchestrator | Friday 09 January 2026 00:26:53 +0000 (0:00:00.649) 0:01:09.944 ********
2026-01-09 00:29:08.109180 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.109188 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109195 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109202 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109210 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109217 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109225 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109233 | orchestrator |
2026-01-09 00:29:08.109241 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-09 00:29:08.109249 | orchestrator | Friday 09 January 2026 00:26:53 +0000 (0:00:00.230) 0:01:10.174 ********
2026-01-09 00:29:08.109256 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.109267 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109274 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109282 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109289 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109296 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109303 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109311 | orchestrator |
2026-01-09 00:29:08.109318 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-09 00:29:08.109326 | orchestrator | Friday 09 January 2026 00:26:54 +0000 (0:00:01.269) 0:01:11.444 ********
2026-01-09 00:29:08.109334 | orchestrator | changed: [testbed-manager]
2026-01-09 00:29:08.109342 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:29:08.109349 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:29:08.109357 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:29:08.109364 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:29:08.109372 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:29:08.109380 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:29:08.109388 | orchestrator |
2026-01-09 00:29:08.109396 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-09 00:29:08.109405 | orchestrator | Friday 09 January 2026 00:26:56 +0000 (0:00:01.987) 0:01:13.431 ********
2026-01-09 00:29:08.109413 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109420 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.109428 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109435 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109444 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109453 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109461 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109470 | orchestrator |
2026-01-09 00:29:08.109481 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-09 00:29:08.109492 | orchestrator | Friday 09 January 2026 00:26:59 +0000 (0:00:02.690) 0:01:16.122 ********
2026-01-09 00:29:08.109501 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.109511 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109520 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109527 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109535 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109543 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109552 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109560 | orchestrator |
2026-01-09 00:29:08.109567 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-09 00:29:08.109575 | orchestrator | Friday 09 January 2026 00:27:35 +0000 (0:00:36.342) 0:01:52.464 ********
2026-01-09 00:29:08.109613 | orchestrator | changed: [testbed-manager]
2026-01-09 00:29:08.109620 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:29:08.109628 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:29:08.109636 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:29:08.109644 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:29:08.109652 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:29:08.109660 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:29:08.109679 | orchestrator |
2026-01-09 00:29:08.109687 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-09 00:29:08.109695 | orchestrator | Friday 09 January 2026 00:28:50 +0000 (0:01:14.462) 0:03:06.926 ********
2026-01-09 00:29:08.109702 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:08.109710 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109717 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109724 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109732 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109739 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109747 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109755 | orchestrator |
2026-01-09 00:29:08.109763 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-09 00:29:08.109770 | orchestrator | Friday 09 January 2026 00:28:52 +0000 (0:00:02.111) 0:03:09.038 ********
2026-01-09 00:29:08.109777 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:08.109784 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:08.109792 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:08.109799 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:08.109822 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:08.109831 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:08.109839 | orchestrator | changed: [testbed-manager]
2026-01-09 00:29:08.109848 | orchestrator |
2026-01-09 00:29:08.109857 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-09 00:29:08.109865 | orchestrator | Friday 09 January 2026 00:29:05 +0000 (0:00:13.485) 0:03:22.524 ********
2026-01-09 00:29:08.109904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-09 00:29:08.109923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-09 00:29:08.109935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-09 00:29:08.109945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-09 00:29:08.109955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-09 00:29:08.109963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-09 00:29:08.109982 | orchestrator |
2026-01-09 00:29:08.109990 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-09 00:29:08.109998 | orchestrator | Friday 09 January 2026 00:29:06 +0000 (0:00:00.448) 0:03:22.973 ********
2026-01-09 00:29:08.110009 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110079 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110090 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:08.110097 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:29:08.110105 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110113 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110121 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:29:08.110128 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:29:08.110136 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110153 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:29:08.110161 | orchestrator |
2026-01-09 00:29:08.110169 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-09 00:29:08.110177 | orchestrator | Friday 09 January 2026 00:29:07 +0000 (0:00:01.748) 0:03:24.721 ********
2026-01-09 00:29:08.110185 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:08.110192 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:08.110197 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:08.110202 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:08.110207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:08.110220 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.892506 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.892666 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.892685 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.892697 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.892708 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.892720 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.892730 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.892742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.892752 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.892765 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.892784 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.892802 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.892821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.892867 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.892887 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:14.892907 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:14.892927 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:14.892947 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.892958 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.892992 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.893012 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.893030 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.893080 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:14.893097 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.893111 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:14.893124 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.893136 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.893148 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.893161 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.893174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.893186 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.893198 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.893210 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:29:14.893222 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.893234 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.893246 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.893259 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:29:14.893270 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:29:14.893282 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:14.893295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:14.893313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-09 00:29:14.893327 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:14.893339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:14.893372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-09 00:29:14.893387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.893397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.893418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-09 00:29:14.893429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.893439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.893450 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-09 00:29:14.893460 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.893471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.893482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-09 00:29:14.893492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.893502 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.893513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-09 00:29:14.893524 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.893534 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.893545 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-09 00:29:14.893556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.893566 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.893604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-09 00:29:14.893615 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.893625 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.893636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-09 00:29:14.893646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.893657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.893668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-09 00:29:14.893678 | orchestrator |
2026-01-09 00:29:14.893690 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-09 00:29:14.893700 | orchestrator | Friday 09 January 2026 00:29:12 +0000 (0:00:04.838) 0:03:29.560 ********
2026-01-09 00:29:14.893711 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893722 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893743 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893754 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893775 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-09 00:29:14.893786 | orchestrator |
2026-01-09 00:29:14.893796 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-09 00:29:14.893807 | orchestrator | Friday 09 January 2026 00:29:14 +0000 (0:00:01.518) 0:03:31.079 ********
2026-01-09 00:29:14.893818 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893836 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:14.893847 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893857 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:29:14.893868 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893878 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:29:14.893894 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893905 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:29:14.893917 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893935 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:14.893964 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250475 | orchestrator |
2026-01-09 00:29:31.250619 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-09 00:29:31.250632 | orchestrator | Friday 09 January 2026 00:29:14 +0000 (0:00:00.527) 0:03:31.606 ********
2026-01-09 00:29:31.250639 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250649 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250656 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:31.250663 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:29:31.250670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250676 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250682 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:29:31.250689 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:29:31.250695 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-09 00:29:31.250714 | orchestrator |
2026-01-09 00:29:31.250720 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-09 00:29:31.250727 | orchestrator | Friday 09 January 2026 00:29:16 +0000 (0:00:01.633) 0:03:33.240 ********
2026-01-09 00:29:31.250733 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250739 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:31.250745 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250751 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:29:31.250758 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250764 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:29:31.250770 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250776 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:29:31.250782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250789 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250795 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-09 00:29:31.250824 | orchestrator |
2026-01-09 00:29:31.250832 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-09 00:29:31.250838 | orchestrator | Friday 09 January 2026 00:29:19 +0000 (0:00:02.591) 0:03:35.832 ********
2026-01-09 00:29:31.250849 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:31.250859 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:29:31.250868 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:29:31.250878 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:29:31.250888 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:29:31.250899 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:29:31.250909 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:29:31.250920 | orchestrator |
2026-01-09 00:29:31.250930 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-09 00:29:31.250940 | orchestrator | Friday 09 January 2026 00:29:19 +0000 (0:00:00.322) 0:03:36.154 ********
2026-01-09 00:29:31.250951 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:31.250962 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:31.250969 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:31.250977 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:31.250984 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:31.250991 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:31.250999 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:31.251006 | orchestrator |
2026-01-09 00:29:31.251013 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-09 00:29:31.251020 | orchestrator | Friday 09 January 2026 00:29:24 +0000 (0:00:05.541) 0:03:41.696 ********
2026-01-09 00:29:31.251028 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-09 00:29:31.251035 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-09 00:29:31.251042 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:29:31.251049 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-09 00:29:31.251057 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:29:31.251064 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:29:31.251071 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-09 00:29:31.251078 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:29:31.251085 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-09 00:29:31.251092 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-09 00:29:31.251099 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:29:31.251106 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:29:31.251128 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-09 00:29:31.251136 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:29:31.251146 | orchestrator |
2026-01-09 00:29:31.251156 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-09 00:29:31.251167 | orchestrator | Friday 09 January 2026 00:29:25 +0000 (0:00:00.246) 0:03:41.942 ********
2026-01-09 00:29:31.251178 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-09 00:29:31.251188 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-09 00:29:31.251198 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-09 00:29:31.251223 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-09 00:29:31.251232 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-09 00:29:31.251238 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-09 00:29:31.251244 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-09 00:29:31.251250 | orchestrator |
2026-01-09 00:29:31.251256 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-09 00:29:31.251262 | orchestrator | Friday 09 January 2026 00:29:26 +0000 (0:00:01.131) 0:03:43.074 ********
2026-01-09 00:29:31.251271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:29:31.251280 | orchestrator |
2026-01-09 00:29:31.251297 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-09 00:29:31.251304 | orchestrator | Friday 09 January 2026 00:29:26 +0000 (0:00:00.491) 0:03:43.565 ********
2026-01-09 00:29:31.251310 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:31.251316 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:31.251322 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:31.251328 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:31.251334 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:31.251340 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:31.251346 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:31.251352 | orchestrator |
2026-01-09 00:29:31.251358 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-09 00:29:31.251365 | orchestrator | Friday 09 January 2026 00:29:28 +0000 (0:00:01.338) 0:03:44.904 ********
2026-01-09 00:29:31.251371 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:31.251377 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:31.251383 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:31.251389 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:31.251395 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:31.251401 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:31.251407 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:31.251413 | orchestrator |
2026-01-09 00:29:31.251419 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-09 00:29:31.251425 | orchestrator | Friday 09 January 2026 00:29:28 +0000 (0:00:00.666) 0:03:45.571 ********
2026-01-09 00:29:31.251431 | orchestrator | changed: [testbed-manager]
2026-01-09 00:29:31.251437 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:29:31.251443 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:29:31.251449 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:29:31.251455 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:29:31.251462 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:29:31.251468 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:29:31.251474 | orchestrator |
2026-01-09 00:29:31.251480 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-09 00:29:31.251486 | orchestrator | Friday 09 January 2026 00:29:29 +0000 (0:00:00.673) 0:03:46.244 ********
2026-01-09 00:29:31.251492 | orchestrator | ok: [testbed-manager]
2026-01-09 00:29:31.251498 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:29:31.251504 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:29:31.251510 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:29:31.251516 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:29:31.251522 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:29:31.251528 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:29:31.251534 |
orchestrator | 2026-01-09 00:29:31.251540 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-09 00:29:31.251547 | orchestrator | Friday 09 January 2026 00:29:30 +0000 (0:00:00.638) 0:03:46.883 ******** 2026-01-09 00:29:31.251622 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917135.679015, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:31.251632 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917130.9692616, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:31.251646 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917106.1677604, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-01-09 00:29:31.251671 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917126.3432045, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357494 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917107.4143465, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357644 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917124.9274423, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357657 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767917111.793366, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357666 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357673 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357707 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357715 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357741 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357749 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357757 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 00:29:36.357766 | orchestrator | 2026-01-09 00:29:36.357774 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-09 00:29:36.357786 | orchestrator | Friday 09 January 2026 00:29:31 +0000 (0:00:01.082) 0:03:47.965 ******** 2026-01-09 00:29:36.357795 | orchestrator | changed: [testbed-manager] 2026-01-09 00:29:36.357804 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:29:36.357814 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:29:36.357824 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:29:36.357831 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:29:36.357841 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:29:36.357850 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:29:36.357857 | orchestrator | 2026-01-09 00:29:36.357865 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-09 00:29:36.357873 | orchestrator | Friday 09 January 2026 00:29:32 +0000 (0:00:01.179) 0:03:49.145 ******** 2026-01-09 00:29:36.357882 | orchestrator | changed: [testbed-manager] 2026-01-09 00:29:36.357889 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:29:36.357906 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:29:36.357914 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:29:36.357922 | orchestrator | changed: [testbed-node-1] 2026-01-09 
00:29:36.357930 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:29:36.357938 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:29:36.357946 | orchestrator | 2026-01-09 00:29:36.357955 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-09 00:29:36.357963 | orchestrator | Friday 09 January 2026 00:29:33 +0000 (0:00:01.215) 0:03:50.360 ******** 2026-01-09 00:29:36.357972 | orchestrator | changed: [testbed-manager] 2026-01-09 00:29:36.357980 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:29:36.357989 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:29:36.357997 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:29:36.358006 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:29:36.358105 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:29:36.358115 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:29:36.358121 | orchestrator | 2026-01-09 00:29:36.358128 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-09 00:29:36.358133 | orchestrator | Friday 09 January 2026 00:29:34 +0000 (0:00:01.257) 0:03:51.618 ******** 2026-01-09 00:29:36.358139 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:29:36.358144 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:29:36.358148 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:29:36.358154 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:29:36.358159 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:29:36.358163 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:29:36.358168 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:29:36.358173 | orchestrator | 2026-01-09 00:29:36.358183 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-09 00:29:36.358188 | orchestrator | Friday 09 January 2026 00:29:35 +0000 (0:00:00.282) 0:03:51.901 ******** 2026-01-09 
00:29:36.358193 | orchestrator | ok: [testbed-manager] 2026-01-09 00:29:36.358199 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:29:36.358204 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:29:36.358209 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:29:36.358213 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:29:36.358218 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:29:36.358223 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:29:36.358227 | orchestrator | 2026-01-09 00:29:36.358232 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-09 00:29:36.358237 | orchestrator | Friday 09 January 2026 00:29:35 +0000 (0:00:00.744) 0:03:52.645 ******** 2026-01-09 00:29:36.358245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:29:36.358252 | orchestrator | 2026-01-09 00:29:36.358257 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-09 00:29:36.358270 | orchestrator | Friday 09 January 2026 00:29:36 +0000 (0:00:00.430) 0:03:53.076 ******** 2026-01-09 00:30:55.822421 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.822595 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:30:55.822611 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:30:55.822622 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:30:55.822633 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:30:55.822644 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:30:55.822655 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:30:55.822666 | orchestrator | 2026-01-09 00:30:55.822679 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-09 00:30:55.822692 | orchestrator | 
Friday 09 January 2026 00:29:45 +0000 (0:00:09.459) 0:04:02.535 ******** 2026-01-09 00:30:55.822703 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.822715 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.822726 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.822766 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:30:55.822778 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.822788 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:30:55.822799 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.822810 | orchestrator | 2026-01-09 00:30:55.822821 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-09 00:30:55.822831 | orchestrator | Friday 09 January 2026 00:29:47 +0000 (0:00:01.427) 0:04:03.963 ******** 2026-01-09 00:30:55.822842 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.822853 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.822863 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.822874 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.822885 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:30:55.822896 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.822906 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:30:55.822917 | orchestrator | 2026-01-09 00:30:55.822928 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-09 00:30:55.822941 | orchestrator | Friday 09 January 2026 00:29:48 +0000 (0:00:01.193) 0:04:05.157 ******** 2026-01-09 00:30:55.822954 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.822966 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.822978 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.822990 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.823002 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:30:55.823015 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.823027 | orchestrator | ok: 
[testbed-node-2] 2026-01-09 00:30:55.823040 | orchestrator | 2026-01-09 00:30:55.823052 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-09 00:30:55.823067 | orchestrator | Friday 09 January 2026 00:29:48 +0000 (0:00:00.340) 0:04:05.498 ******** 2026-01-09 00:30:55.823079 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.823092 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.823104 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.823116 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.823127 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:30:55.823138 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.823148 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:30:55.823159 | orchestrator | 2026-01-09 00:30:55.823169 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-09 00:30:55.823180 | orchestrator | Friday 09 January 2026 00:29:49 +0000 (0:00:00.310) 0:04:05.808 ******** 2026-01-09 00:30:55.823191 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.823202 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.823212 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.823223 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.823233 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:30:55.823244 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.823254 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:30:55.823265 | orchestrator | 2026-01-09 00:30:55.823275 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-09 00:30:55.823286 | orchestrator | Friday 09 January 2026 00:29:49 +0000 (0:00:00.284) 0:04:06.093 ******** 2026-01-09 00:30:55.823297 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:30:55.823307 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:30:55.823318 | orchestrator | ok: 
[testbed-node-0] 2026-01-09 00:30:55.823329 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:30:55.823340 | orchestrator | ok: [testbed-manager] 2026-01-09 00:30:55.823350 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:30:55.823361 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:30:55.823371 | orchestrator | 2026-01-09 00:30:55.823382 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-09 00:30:55.823393 | orchestrator | Friday 09 January 2026 00:29:54 +0000 (0:00:05.325) 0:04:11.418 ******** 2026-01-09 00:30:55.823406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:30:55.823448 | orchestrator | 2026-01-09 00:30:55.823460 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-09 00:30:55.823486 | orchestrator | Friday 09 January 2026 00:29:55 +0000 (0:00:00.388) 0:04:11.807 ******** 2026-01-09 00:30:55.823497 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823508 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-09 00:30:55.823519 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:30:55.823530 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823540 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-09 00:30:55.823551 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:30:55.823562 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823572 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-09 00:30:55.823583 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823594 | orchestrator | 
skipping: [testbed-node-5] => (item=apt-daily)  2026-01-09 00:30:55.823604 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:30:55.823615 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823625 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:30:55.823636 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-09 00:30:55.823647 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823658 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-09 00:30:55.823686 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:30:55.823697 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:30:55.823708 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-09 00:30:55.823719 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-09 00:30:55.823729 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:30:55.823740 | orchestrator | 2026-01-09 00:30:55.823751 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-09 00:30:55.823762 | orchestrator | Friday 09 January 2026 00:29:55 +0000 (0:00:00.366) 0:04:12.173 ******** 2026-01-09 00:30:55.823773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:30:55.823785 | orchestrator | 2026-01-09 00:30:55.823796 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-09 00:30:55.823806 | orchestrator | Friday 09 January 2026 00:29:55 +0000 (0:00:00.442) 0:04:12.616 ******** 2026-01-09 00:30:55.823817 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-09 00:30:55.823828 | orchestrator | skipping: 
[testbed-manager] 2026-01-09 00:30:55.823838 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-09 00:30:55.823849 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-09 00:30:55.823860 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:30:55.823870 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-09 00:30:55.823881 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:30:55.823891 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-09 00:30:55.823902 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:30:55.823913 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-09 00:30:55.823923 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:30:55.823934 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:30:55.823944 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-09 00:30:55.823955 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:30:55.823965 | orchestrator | 2026-01-09 00:30:55.823984 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-09 00:30:55.823995 | orchestrator | Friday 09 January 2026 00:29:56 +0000 (0:00:00.307) 0:04:12.923 ******** 2026-01-09 00:30:55.824006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:30:55.824017 | orchestrator | 2026-01-09 00:30:55.824027 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-09 00:30:55.824038 | orchestrator | Friday 09 January 2026 00:29:56 +0000 (0:00:00.415) 0:04:13.338 ******** 2026-01-09 00:30:55.824048 | orchestrator | changed: [testbed-node-0] 2026-01-09 
00:30:55.824059 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:30:55.824070 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:30:55.824080 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:30:55.824091 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:30:55.824102 | orchestrator | changed: [testbed-manager] 2026-01-09 00:30:55.824112 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:30:55.824123 | orchestrator | 2026-01-09 00:30:55.824133 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-09 00:30:55.824144 | orchestrator | Friday 09 January 2026 00:30:30 +0000 (0:00:34.245) 0:04:47.584 ******** 2026-01-09 00:30:55.824155 | orchestrator | changed: [testbed-manager] 2026-01-09 00:30:55.824165 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:30:55.824176 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:30:55.824186 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:30:55.824197 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:30:55.824208 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:30:55.824218 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:30:55.824228 | orchestrator | 2026-01-09 00:30:55.824239 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-09 00:30:55.824250 | orchestrator | Friday 09 January 2026 00:30:39 +0000 (0:00:08.866) 0:04:56.450 ******** 2026-01-09 00:30:55.824260 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:30:55.824271 | orchestrator | changed: [testbed-manager] 2026-01-09 00:30:55.824281 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:30:55.824292 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:30:55.824303 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:30:55.824313 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:30:55.824324 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:30:55.824335 | 
orchestrator |
2026-01-09 00:30:55.824346 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-09 00:30:55.824356 | orchestrator | Friday 09 January 2026 00:30:47 +0000 (0:00:08.208) 0:05:04.659 ********
2026-01-09 00:30:55.824367 | orchestrator | ok: [testbed-manager]
2026-01-09 00:30:55.824378 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:30:55.824388 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:30:55.824399 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:30:55.824410 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:30:55.824420 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:30:55.824450 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:30:55.824461 | orchestrator |
2026-01-09 00:30:55.824472 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-09 00:30:55.824483 | orchestrator | Friday 09 January 2026 00:30:49 +0000 (0:00:01.881) 0:05:06.540 ********
2026-01-09 00:30:55.824494 | orchestrator | changed: [testbed-manager]
2026-01-09 00:30:55.824505 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:30:55.824515 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:30:55.824525 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:30:55.824536 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:30:55.824547 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:30:55.824557 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:30:55.824568 | orchestrator |
2026-01-09 00:30:55.824586 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-09 00:31:07.529754 | orchestrator | Friday 09 January 2026 00:30:55 +0000 (0:00:05.992) 0:05:12.533 ********
2026-01-09 00:31:07.529861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:31:07.529875 | orchestrator |
2026-01-09 00:31:07.529884 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-09 00:31:07.529893 | orchestrator | Friday 09 January 2026 00:30:56 +0000 (0:00:00.576) 0:05:13.110 ********
2026-01-09 00:31:07.529902 | orchestrator | changed: [testbed-manager]
2026-01-09 00:31:07.529910 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:31:07.529919 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:31:07.529927 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:31:07.529935 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:31:07.529946 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:31:07.529959 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:31:07.529972 | orchestrator |
2026-01-09 00:31:07.529986 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-09 00:31:07.530000 | orchestrator | Friday 09 January 2026 00:30:57 +0000 (0:00:00.770) 0:05:13.881 ********
2026-01-09 00:31:07.530009 | orchestrator | ok: [testbed-manager]
2026-01-09 00:31:07.530074 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:31:07.530090 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:31:07.530103 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:31:07.530111 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:31:07.530119 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:31:07.530144 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:31:07.530152 | orchestrator |
2026-01-09 00:31:07.530160 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-09 00:31:07.530168 | orchestrator | Friday 09 January 2026 00:30:58 +0000 (0:00:01.723) 0:05:15.604 ********
2026-01-09 00:31:07.530177 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:31:07.530185 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:31:07.530193 | orchestrator | changed: [testbed-manager]
2026-01-09 00:31:07.530206 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:31:07.530220 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:31:07.530231 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:31:07.530239 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:31:07.530247 | orchestrator |
2026-01-09 00:31:07.530255 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-09 00:31:07.530264 | orchestrator | Friday 09 January 2026 00:30:59 +0000 (0:00:00.872) 0:05:16.477 ********
2026-01-09 00:31:07.530271 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.530279 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.530289 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.530298 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:31:07.530309 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:31:07.530324 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:31:07.530339 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:31:07.530350 | orchestrator |
2026-01-09 00:31:07.530361 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-09 00:31:07.530376 | orchestrator | Friday 09 January 2026 00:31:00 +0000 (0:00:00.297) 0:05:16.774 ********
2026-01-09 00:31:07.530387 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.530400 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.530433 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.530444 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:31:07.530452 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:31:07.530462 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:31:07.530471 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:31:07.530480 | orchestrator |
2026-01-09 00:31:07.530490 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-09 00:31:07.530517 | orchestrator | Friday 09 January 2026 00:31:00 +0000 (0:00:00.484) 0:05:17.259 ********
2026-01-09 00:31:07.530528 | orchestrator | ok: [testbed-manager]
2026-01-09 00:31:07.530538 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:31:07.530547 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:31:07.530556 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:31:07.530566 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:31:07.530575 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:31:07.530584 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:31:07.530593 | orchestrator |
2026-01-09 00:31:07.530602 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-09 00:31:07.530611 | orchestrator | Friday 09 January 2026 00:31:00 +0000 (0:00:00.280) 0:05:17.540 ********
2026-01-09 00:31:07.530624 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.530637 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.530649 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.530671 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:31:07.530681 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:31:07.530689 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:31:07.530697 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:31:07.530705 | orchestrator |
2026-01-09 00:31:07.530713 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-09 00:31:07.530722 | orchestrator | Friday 09 January 2026 00:31:01 +0000 (0:00:00.302) 0:05:17.842 ********
2026-01-09 00:31:07.530730 | orchestrator | ok: [testbed-manager]
2026-01-09 00:31:07.530737 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:31:07.530747 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:31:07.530761 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:31:07.530774 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:31:07.530785 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:31:07.530793 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:31:07.530802 | orchestrator |
2026-01-09 00:31:07.530815 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-09 00:31:07.530829 | orchestrator | Friday 09 January 2026 00:31:01 +0000 (0:00:00.332) 0:05:18.175 ********
2026-01-09 00:31:07.530842 | orchestrator | ok: [testbed-manager] =>
2026-01-09 00:31:07.530860 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.530877 | orchestrator | ok: [testbed-node-3] =>
2026-01-09 00:31:07.530890 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.530903 | orchestrator | ok: [testbed-node-4] =>
2026-01-09 00:31:07.530916 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.530928 | orchestrator | ok: [testbed-node-5] =>
2026-01-09 00:31:07.530942 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.530977 | orchestrator | ok: [testbed-node-0] =>
2026-01-09 00:31:07.530986 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.530994 | orchestrator | ok: [testbed-node-1] =>
2026-01-09 00:31:07.531002 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.531010 | orchestrator | ok: [testbed-node-2] =>
2026-01-09 00:31:07.531018 | orchestrator |  docker_version: 5:27.5.1
2026-01-09 00:31:07.531026 | orchestrator |
2026-01-09 00:31:07.531034 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-09 00:31:07.531042 | orchestrator | Friday 09 January 2026 00:31:01 +0000 (0:00:00.284) 0:05:18.459 ********
2026-01-09 00:31:07.531050 | orchestrator | ok: [testbed-manager] =>
2026-01-09 00:31:07.531058 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531066 | orchestrator | ok: [testbed-node-3] =>
2026-01-09 00:31:07.531078 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531091 | orchestrator | ok: [testbed-node-4] =>
2026-01-09 00:31:07.531104 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531117 | orchestrator | ok: [testbed-node-5] =>
2026-01-09 00:31:07.531125 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531132 | orchestrator | ok: [testbed-node-0] =>
2026-01-09 00:31:07.531140 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531148 | orchestrator | ok: [testbed-node-1] =>
2026-01-09 00:31:07.531166 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531174 | orchestrator | ok: [testbed-node-2] =>
2026-01-09 00:31:07.531182 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-09 00:31:07.531190 | orchestrator |
2026-01-09 00:31:07.531198 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-09 00:31:07.531206 | orchestrator | Friday 09 January 2026 00:31:02 +0000 (0:00:00.309) 0:05:18.769 ********
2026-01-09 00:31:07.531213 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.531221 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.531229 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.531237 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:31:07.531244 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:31:07.531252 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:31:07.531260 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:31:07.531267 | orchestrator |
2026-01-09 00:31:07.531276 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-09 00:31:07.531283 | orchestrator | Friday 09 January 2026 00:31:02 +0000 (0:00:00.272) 0:05:19.041 ********
2026-01-09 00:31:07.531291 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.531299 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.531307 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.531314 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:31:07.531322 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:31:07.531330 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:31:07.531338 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:31:07.531345 | orchestrator |
2026-01-09 00:31:07.531353 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-09 00:31:07.531361 | orchestrator | Friday 09 January 2026 00:31:02 +0000 (0:00:00.364) 0:05:19.406 ********
2026-01-09 00:31:07.531378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:31:07.531389 | orchestrator |
2026-01-09 00:31:07.531397 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-09 00:31:07.531404 | orchestrator | Friday 09 January 2026 00:31:03 +0000 (0:00:00.459) 0:05:19.866 ********
2026-01-09 00:31:07.531434 | orchestrator | ok: [testbed-manager]
2026-01-09 00:31:07.531442 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:31:07.531450 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:31:07.531458 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:31:07.531465 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:31:07.531473 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:31:07.531481 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:31:07.531489 | orchestrator |
2026-01-09 00:31:07.531497 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-09 00:31:07.531505 | orchestrator | Friday 09 January 2026 00:31:04 +0000 (0:00:00.997) 0:05:20.863 ********
2026-01-09 00:31:07.531512 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:31:07.531520 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:31:07.531528 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:31:07.531535 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:31:07.531543 | orchestrator | ok: [testbed-manager]
2026-01-09 00:31:07.531551 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:31:07.531559 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:31:07.531567 | orchestrator |
2026-01-09 00:31:07.531575 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-09 00:31:07.531592 | orchestrator | Friday 09 January 2026 00:31:07 +0000 (0:00:02.983) 0:05:23.847 ********
2026-01-09 00:31:07.531600 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-09 00:31:07.531608 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-09 00:31:07.531616 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-09 00:31:07.531630 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-09 00:31:07.531638 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-09 00:31:07.531646 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:31:07.531654 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-09 00:31:07.531662 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-09 00:31:07.531669 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-09 00:31:07.531677 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-09 00:31:07.531685 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:31:07.531693 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-09 00:31:07.531701 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-09 00:31:07.531708 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-09 00:31:07.531716 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:31:07.531724 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-09 00:31:07.531739 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-09 00:32:11.114651 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-09 00:32:11.114777 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:11.114794 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-09 00:32:11.114807 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-09 00:32:11.114818 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:11.114829 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-09 00:32:11.114840 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:11.114851 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-09 00:32:11.114862 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-09 00:32:11.114873 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-09 00:32:11.114883 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:11.114895 | orchestrator |
2026-01-09 00:32:11.114908 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-09 00:32:11.114920 | orchestrator | Friday 09 January 2026 00:31:07 +0000 (0:00:00.613) 0:05:24.461 ********
2026-01-09 00:32:11.114931 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.114942 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.114952 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.114963 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.114974 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.114984 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.114995 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115005 | orchestrator |
2026-01-09 00:32:11.115016 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-09 00:32:11.115027 | orchestrator | Friday 09 January 2026 00:31:14 +0000 (0:00:07.017) 0:05:31.478 ********
2026-01-09 00:32:11.115038 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.115049 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115059 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115070 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115080 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115091 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115102 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115112 | orchestrator |
2026-01-09 00:32:11.115124 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-09 00:32:11.115134 | orchestrator | Friday 09 January 2026 00:31:16 +0000 (0:00:01.257) 0:05:32.736 ********
2026-01-09 00:32:11.115145 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.115156 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115169 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115182 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115194 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115236 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115257 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115277 | orchestrator |
2026-01-09 00:32:11.115297 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-09 00:32:11.115345 | orchestrator | Friday 09 January 2026 00:31:24 +0000 (0:00:08.927) 0:05:41.664 ********
2026-01-09 00:32:11.115364 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115379 | orchestrator | changed: [testbed-manager]
2026-01-09 00:32:11.115394 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115410 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115422 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115432 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115443 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115454 | orchestrator |
2026-01-09 00:32:11.115465 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-09 00:32:11.115476 | orchestrator | Friday 09 January 2026 00:31:28 +0000 (0:00:03.431) 0:05:45.095 ********
2026-01-09 00:32:11.115486 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.115497 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115507 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115518 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115528 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115539 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115549 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115559 | orchestrator |
2026-01-09 00:32:11.115570 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-09 00:32:11.115581 | orchestrator | Friday 09 January 2026 00:31:29 +0000 (0:00:01.384) 0:05:46.480 ********
2026-01-09 00:32:11.115591 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.115602 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115612 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115623 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115633 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115643 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115668 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115679 | orchestrator |
2026-01-09 00:32:11.115690 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-09 00:32:11.115701 | orchestrator | Friday 09 January 2026 00:31:31 +0000 (0:00:01.646) 0:05:48.126 ********
2026-01-09 00:32:11.115712 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:11.115722 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:11.115733 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:11.115743 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:11.115754 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:11.115765 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:11.115776 | orchestrator | changed: [testbed-manager]
2026-01-09 00:32:11.115786 | orchestrator |
2026-01-09 00:32:11.115797 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-09 00:32:11.115808 | orchestrator | Friday 09 January 2026 00:31:32 +0000 (0:00:00.637) 0:05:48.764 ********
2026-01-09 00:32:11.115818 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.115829 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115839 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.115849 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115860 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115870 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115881 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.115891 | orchestrator |
2026-01-09 00:32:11.115902 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-09 00:32:11.115933 | orchestrator | Friday 09 January 2026 00:31:41 +0000 (0:00:09.742) 0:05:58.507 ********
2026-01-09 00:32:11.115944 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.115955 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.115965 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.115986 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.115997 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.116007 | orchestrator | changed: [testbed-manager]
2026-01-09 00:32:11.116018 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.116028 | orchestrator |
2026-01-09 00:32:11.116039 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-09 00:32:11.116050 | orchestrator | Friday 09 January 2026 00:31:43 +0000 (0:00:01.578) 0:06:00.085 ********
2026-01-09 00:32:11.116061 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.116071 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.116081 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.116092 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.116103 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.116113 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.116124 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.116134 | orchestrator |
2026-01-09 00:32:11.116145 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-09 00:32:11.116155 | orchestrator | Friday 09 January 2026 00:31:52 +0000 (0:00:09.631) 0:06:09.717 ********
2026-01-09 00:32:11.116166 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.116176 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.116187 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.116197 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.116208 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.116218 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.116229 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.116239 | orchestrator |
2026-01-09 00:32:11.116250 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-09 00:32:11.116261 | orchestrator | Friday 09 January 2026 00:32:04 +0000 (0:00:11.429) 0:06:21.147 ********
2026-01-09 00:32:11.116271 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-09 00:32:11.116282 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-09 00:32:11.116293 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-09 00:32:11.116304 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-09 00:32:11.116314 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-09 00:32:11.116351 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-09 00:32:11.116361 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-09 00:32:11.116372 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-09 00:32:11.116382 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-09 00:32:11.116393 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-09 00:32:11.116403 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-09 00:32:11.116414 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-09 00:32:11.116425 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-09 00:32:11.116435 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-09 00:32:11.116446 | orchestrator |
2026-01-09 00:32:11.116456 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-09 00:32:11.116467 | orchestrator | Friday 09 January 2026 00:32:05 +0000 (0:00:01.213) 0:06:22.360 ********
2026-01-09 00:32:11.116477 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:11.116488 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:11.116498 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:11.116509 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:11.116519 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:11.116529 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:11.116540 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:11.116550 | orchestrator |
2026-01-09 00:32:11.116561 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-09 00:32:11.116572 | orchestrator | Friday 09 January 2026 00:32:06 +0000 (0:00:00.617) 0:06:22.977 ********
2026-01-09 00:32:11.116590 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:11.116600 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:11.116611 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:11.116621 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:11.116632 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:11.116642 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:11.116653 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:11.116663 | orchestrator |
2026-01-09 00:32:11.116674 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-09 00:32:11.116691 | orchestrator | Friday 09 January 2026 00:32:10 +0000 (0:00:03.795) 0:06:26.772 ********
2026-01-09 00:32:11.116702 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:11.116713 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:11.116724 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:11.116734 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:11.116745 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:11.116755 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:11.116766 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:11.116776 | orchestrator |
2026-01-09 00:32:11.116788 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-09 00:32:11.116798 | orchestrator | Friday 09 January 2026 00:32:10 +0000 (0:00:00.541) 0:06:27.314 ********
2026-01-09 00:32:11.116809 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-09 00:32:11.116820 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-09 00:32:11.116830 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:11.116841 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-09 00:32:11.116851 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-09 00:32:11.116862 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:11.116873 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-09 00:32:11.116883 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-09 00:32:11.116894 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:11.116912 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-09 00:32:31.095110 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-09 00:32:31.095247 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:31.095264 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-09 00:32:31.095276 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-09 00:32:31.095287 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:31.095400 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-09 00:32:31.095413 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-09 00:32:31.095424 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:31.095434 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-09 00:32:31.095445 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-09 00:32:31.095456 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:31.095468 | orchestrator |
2026-01-09 00:32:31.095481 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-09 00:32:31.095493 | orchestrator | Friday 09 January 2026 00:32:11 +0000 (0:00:00.814) 0:06:28.129 ********
2026-01-09 00:32:31.095504 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:31.095515 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:31.095525 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:31.095536 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:31.095547 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:31.095557 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:31.095568 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:31.095579 | orchestrator |
2026-01-09 00:32:31.095590 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-09 00:32:31.095626 | orchestrator | Friday 09 January 2026 00:32:11 +0000 (0:00:00.540) 0:06:28.670 ********
2026-01-09 00:32:31.095639 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:31.095653 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:31.095666 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:31.095679 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:31.095691 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:31.095704 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:31.095717 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:31.095729 | orchestrator |
2026-01-09 00:32:31.095742 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-09 00:32:31.095755 | orchestrator | Friday 09 January 2026 00:32:12 +0000 (0:00:00.527) 0:06:29.197 ********
2026-01-09 00:32:31.095768 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:31.095779 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:32:31.095789 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:32:31.095800 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:32:31.095811 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:32:31.095821 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:32:31.095832 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:32:31.095842 | orchestrator |
2026-01-09 00:32:31.095853 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-09 00:32:31.095864 | orchestrator | Friday 09 January 2026 00:32:13 +0000 (0:00:00.560) 0:06:29.758 ********
2026-01-09 00:32:31.095875 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.095886 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:32:31.095896 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:32:31.095907 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:32:31.095918 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:32:31.095928 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:32:31.095939 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:32:31.095949 | orchestrator |
2026-01-09 00:32:31.095960 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-09 00:32:31.095971 | orchestrator | Friday 09 January 2026 00:32:15 +0000 (0:00:02.101) 0:06:31.859 ********
2026-01-09 00:32:31.095983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:32:31.095996 | orchestrator |
2026-01-09 00:32:31.096007 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-09 00:32:31.096018 | orchestrator | Friday 09 January 2026 00:32:16 +0000 (0:00:00.915) 0:06:32.774 ********
2026-01-09 00:32:31.096028 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096039 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:31.096050 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:31.096060 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:31.096071 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:31.096083 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:31.096093 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:31.096104 | orchestrator |
2026-01-09 00:32:31.096115 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-09 00:32:31.096126 | orchestrator | Friday 09 January 2026 00:32:16 +0000 (0:00:00.880) 0:06:33.655 ********
2026-01-09 00:32:31.096137 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096148 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:31.096159 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:31.096170 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:31.096180 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:31.096191 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:31.096201 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:31.096212 | orchestrator |
2026-01-09 00:32:31.096223 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-09 00:32:31.096245 | orchestrator | Friday 09 January 2026 00:32:17 +0000 (0:00:00.892) 0:06:34.548 ********
2026-01-09 00:32:31.096256 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096267 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:31.096278 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:31.096288 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:31.096316 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:31.096386 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:31.096398 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:31.096409 | orchestrator |
2026-01-09 00:32:31.096421 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-09 00:32:31.096452 | orchestrator | Friday 09 January 2026 00:32:19 +0000 (0:00:01.596) 0:06:36.145 ********
2026-01-09 00:32:31.096463 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:32:31.096474 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:32:31.096485 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:32:31.096496 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:32:31.096506 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:32:31.096517 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:32:31.096528 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:32:31.096538 | orchestrator |
2026-01-09 00:32:31.096549 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-09 00:32:31.096560 | orchestrator | Friday 09 January 2026 00:32:20 +0000 (0:00:01.452) 0:06:37.597 ********
2026-01-09 00:32:31.096571 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096582 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:31.096592 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:31.096603 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:31.096613 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:31.096624 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:31.096634 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:31.096645 | orchestrator |
2026-01-09 00:32:31.096656 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-09 00:32:31.096666 | orchestrator | Friday 09 January 2026 00:32:22 +0000 (0:00:01.342) 0:06:38.939 ********
2026-01-09 00:32:31.096677 | orchestrator | changed: [testbed-manager]
2026-01-09 00:32:31.096687 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:32:31.096698 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:32:31.096708 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:32:31.096719 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:32:31.096729 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:32:31.096740 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:32:31.096751 | orchestrator |
2026-01-09 00:32:31.096762 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-09 00:32:31.096772 | orchestrator | Friday 09 January 2026 00:32:23 +0000 (0:00:01.430) 0:06:40.370 ********
2026-01-09 00:32:31.096783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:32:31.096795 | orchestrator |
2026-01-09 00:32:31.096806 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-09 00:32:31.096816 | orchestrator | Friday 09 January 2026 00:32:24 +0000 (0:00:01.054) 0:06:41.424 ********
2026-01-09 00:32:31.096827 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:32:31.096838 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096848 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:32:31.096859 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:32:31.096869 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:32:31.096880 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:32:31.096891 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:32:31.096901 | orchestrator |
2026-01-09 00:32:31.096912 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-09 00:32:31.096923 | orchestrator | Friday 09 January 2026 00:32:26 +0000 (0:00:01.411) 0:06:42.836 ********
2026-01-09 00:32:31.096949 | orchestrator | ok: [testbed-manager]
2026-01-09 00:32:31.096960 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:32:31.096971 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:32:31.096981 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:32:31.096992 | orchestrator |
ok: [testbed-node-0] 2026-01-09 00:32:31.097002 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:31.097013 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:31.097023 | orchestrator | 2026-01-09 00:32:31.097034 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-09 00:32:31.097045 | orchestrator | Friday 09 January 2026 00:32:27 +0000 (0:00:01.213) 0:06:44.049 ******** 2026-01-09 00:32:31.097056 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:31.097066 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:31.097077 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:32:31.097088 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:31.097098 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:31.097109 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:31.097119 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:31.097130 | orchestrator | 2026-01-09 00:32:31.097141 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-09 00:32:31.097152 | orchestrator | Friday 09 January 2026 00:32:28 +0000 (0:00:01.227) 0:06:45.276 ******** 2026-01-09 00:32:31.097163 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:31.097173 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:31.097184 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:32:31.097194 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:31.097205 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:31.097215 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:31.097226 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:31.097236 | orchestrator | 2026-01-09 00:32:31.097254 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-09 00:32:31.097265 | orchestrator | Friday 09 January 2026 00:32:29 +0000 (0:00:01.297) 0:06:46.573 ******** 2026-01-09 00:32:31.097276 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:32:31.097286 | orchestrator | 2026-01-09 00:32:31.097317 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:31.097328 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.924) 0:06:47.498 ******** 2026-01-09 00:32:31.097339 | orchestrator | 2026-01-09 00:32:31.097350 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:31.097360 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.041) 0:06:47.540 ******** 2026-01-09 00:32:31.097371 | orchestrator | 2026-01-09 00:32:31.097382 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:31.097392 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.040) 0:06:47.581 ******** 2026-01-09 00:32:31.097403 | orchestrator | 2026-01-09 00:32:31.097414 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:31.097431 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.048) 0:06:47.630 ******** 2026-01-09 00:32:57.814850 | orchestrator | 2026-01-09 00:32:57.814978 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:57.814996 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.040) 0:06:47.671 ******** 2026-01-09 00:32:57.815009 | orchestrator | 2026-01-09 00:32:57.815020 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:57.815031 | orchestrator | Friday 09 January 2026 00:32:30 +0000 (0:00:00.040) 0:06:47.711 ******** 2026-01-09 00:32:57.815042 | orchestrator | 2026-01-09 
00:32:57.815053 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-09 00:32:57.815064 | orchestrator | Friday 09 January 2026 00:32:31 +0000 (0:00:00.049) 0:06:47.760 ******** 2026-01-09 00:32:57.815108 | orchestrator | 2026-01-09 00:32:57.815119 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-09 00:32:57.815130 | orchestrator | Friday 09 January 2026 00:32:31 +0000 (0:00:00.041) 0:06:47.801 ******** 2026-01-09 00:32:57.815141 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:57.815153 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:57.815163 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:57.815174 | orchestrator | 2026-01-09 00:32:57.815185 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-09 00:32:57.815195 | orchestrator | Friday 09 January 2026 00:32:32 +0000 (0:00:01.138) 0:06:48.940 ******** 2026-01-09 00:32:57.815206 | orchestrator | changed: [testbed-manager] 2026-01-09 00:32:57.815218 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:32:57.815229 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:32:57.815239 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:32:57.815250 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:32:57.815287 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:32:57.815298 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:32:57.815309 | orchestrator | 2026-01-09 00:32:57.815320 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-09 00:32:57.815331 | orchestrator | Friday 09 January 2026 00:32:33 +0000 (0:00:01.313) 0:06:50.254 ******** 2026-01-09 00:32:57.815341 | orchestrator | changed: [testbed-manager] 2026-01-09 00:32:57.815352 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:32:57.815366 | orchestrator | changed: [testbed-node-4] 2026-01-09 
00:32:57.815379 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:32:57.815392 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:32:57.815403 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:32:57.815416 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:32:57.815429 | orchestrator | 2026-01-09 00:32:57.815441 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-09 00:32:57.815454 | orchestrator | Friday 09 January 2026 00:32:34 +0000 (0:00:01.465) 0:06:51.719 ******** 2026-01-09 00:32:57.815467 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:32:57.815480 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:32:57.815493 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:32:57.815505 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:32:57.815516 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:32:57.815529 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:32:57.815542 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:32:57.815554 | orchestrator | 2026-01-09 00:32:57.815567 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-09 00:32:57.815580 | orchestrator | Friday 09 January 2026 00:32:37 +0000 (0:00:02.362) 0:06:54.082 ******** 2026-01-09 00:32:57.815590 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:32:57.815601 | orchestrator | 2026-01-09 00:32:57.815618 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-09 00:32:57.815638 | orchestrator | Friday 09 January 2026 00:32:37 +0000 (0:00:00.124) 0:06:54.206 ******** 2026-01-09 00:32:57.815659 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.815679 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:32:57.815695 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:32:57.815706 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:32:57.815716 | 
orchestrator | changed: [testbed-node-0] 2026-01-09 00:32:57.815727 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:32:57.815737 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:32:57.815749 | orchestrator | 2026-01-09 00:32:57.815760 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-09 00:32:57.815772 | orchestrator | Friday 09 January 2026 00:32:38 +0000 (0:00:01.098) 0:06:55.305 ******** 2026-01-09 00:32:57.815783 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:32:57.815794 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:32:57.815804 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:32:57.815825 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:32:57.815835 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:32:57.815846 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:32:57.815857 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:32:57.815868 | orchestrator | 2026-01-09 00:32:57.815879 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-09 00:32:57.815890 | orchestrator | Friday 09 January 2026 00:32:39 +0000 (0:00:00.527) 0:06:55.832 ******** 2026-01-09 00:32:57.815902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:32:57.815916 | orchestrator | 2026-01-09 00:32:57.815927 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-09 00:32:57.815938 | orchestrator | Friday 09 January 2026 00:32:40 +0000 (0:00:01.102) 0:06:56.935 ******** 2026-01-09 00:32:57.815948 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.815959 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:57.815970 | orchestrator | ok: 
[testbed-node-4] 2026-01-09 00:32:57.815981 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:57.815991 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:57.816002 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:57.816013 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:57.816023 | orchestrator | 2026-01-09 00:32:57.816034 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-09 00:32:57.816045 | orchestrator | Friday 09 January 2026 00:32:41 +0000 (0:00:00.858) 0:06:57.793 ******** 2026-01-09 00:32:57.816056 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-09 00:32:57.816085 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-09 00:32:57.816097 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-09 00:32:57.816108 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-09 00:32:57.816119 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-09 00:32:57.816130 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-09 00:32:57.816140 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-09 00:32:57.816151 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-09 00:32:57.816162 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-09 00:32:57.816173 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-09 00:32:57.816183 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-09 00:32:57.816194 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-09 00:32:57.816205 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-09 00:32:57.816215 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-09 00:32:57.816226 | orchestrator | 2026-01-09 00:32:57.816237 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-09 00:32:57.816247 | orchestrator | Friday 09 January 2026 00:32:43 +0000 (0:00:02.607) 0:07:00.401 ******** 2026-01-09 00:32:57.816283 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:32:57.816302 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:32:57.816330 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:32:57.816350 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:32:57.816367 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:32:57.816384 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:32:57.816402 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:32:57.816418 | orchestrator | 2026-01-09 00:32:57.816433 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-09 00:32:57.816449 | orchestrator | Friday 09 January 2026 00:32:44 +0000 (0:00:00.775) 0:07:01.177 ******** 2026-01-09 00:32:57.816469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:32:57.816501 | orchestrator | 2026-01-09 00:32:57.816521 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-09 00:32:57.816541 | orchestrator | Friday 09 January 2026 00:32:45 +0000 (0:00:00.851) 0:07:02.029 ******** 2026-01-09 00:32:57.816559 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.816572 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:57.816583 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:32:57.816594 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:57.816604 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:57.816615 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:57.816625 | orchestrator | ok: 
[testbed-node-2] 2026-01-09 00:32:57.816636 | orchestrator | 2026-01-09 00:32:57.816647 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-09 00:32:57.816657 | orchestrator | Friday 09 January 2026 00:32:46 +0000 (0:00:00.863) 0:07:02.892 ******** 2026-01-09 00:32:57.816668 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.816679 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:57.816689 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:32:57.816699 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:57.816710 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:57.816720 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:57.816731 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:57.816741 | orchestrator | 2026-01-09 00:32:57.816752 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-09 00:32:57.816763 | orchestrator | Friday 09 January 2026 00:32:47 +0000 (0:00:01.049) 0:07:03.942 ******** 2026-01-09 00:32:57.816773 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:32:57.816784 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:32:57.816794 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:32:57.816805 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:32:57.816815 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:32:57.816826 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:32:57.816836 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:32:57.816847 | orchestrator | 2026-01-09 00:32:57.816858 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-09 00:32:57.816869 | orchestrator | Friday 09 January 2026 00:32:47 +0000 (0:00:00.512) 0:07:04.455 ******** 2026-01-09 00:32:57.816879 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.816908 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:32:57.816920 | 
orchestrator | ok: [testbed-node-4] 2026-01-09 00:32:57.816930 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:32:57.816941 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:32:57.816951 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:32:57.816962 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:32:57.816972 | orchestrator | 2026-01-09 00:32:57.816983 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-09 00:32:57.816994 | orchestrator | Friday 09 January 2026 00:32:49 +0000 (0:00:01.640) 0:07:06.096 ******** 2026-01-09 00:32:57.817005 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:32:57.817015 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:32:57.817026 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:32:57.817037 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:32:57.817047 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:32:57.817057 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:32:57.817068 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:32:57.817078 | orchestrator | 2026-01-09 00:32:57.817089 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-09 00:32:57.817100 | orchestrator | Friday 09 January 2026 00:32:49 +0000 (0:00:00.522) 0:07:06.618 ******** 2026-01-09 00:32:57.817110 | orchestrator | ok: [testbed-manager] 2026-01-09 00:32:57.817121 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:32:57.817131 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:32:57.817142 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:32:57.817161 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:32:57.817172 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:32:57.817192 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:33:32.756591 | orchestrator | 2026-01-09 00:33:32.756766 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-09 00:33:32.756796 | orchestrator | Friday 09 January 2026 00:32:57 +0000 (0:00:07.906) 0:07:14.525 ******** 2026-01-09 00:33:32.756816 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.756838 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:33:32.756851 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:33:32.756862 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:33:32.756873 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:33:32.756884 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:33:32.756894 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:33:32.756905 | orchestrator | 2026-01-09 00:33:32.756917 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-09 00:33:32.756928 | orchestrator | Friday 09 January 2026 00:32:59 +0000 (0:00:01.583) 0:07:16.108 ******** 2026-01-09 00:33:32.756938 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.756949 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:33:32.756959 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:33:32.756970 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:33:32.756980 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:33:32.756991 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:33:32.757002 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:33:32.757012 | orchestrator | 2026-01-09 00:33:32.757024 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-09 00:33:32.757047 | orchestrator | Friday 09 January 2026 00:33:01 +0000 (0:00:01.775) 0:07:17.884 ******** 2026-01-09 00:33:32.757058 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.757069 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:33:32.757079 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:33:32.757090 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:33:32.757101 | 
orchestrator | changed: [testbed-node-0] 2026-01-09 00:33:32.757112 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:33:32.757122 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:33:32.757133 | orchestrator | 2026-01-09 00:33:32.757144 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-09 00:33:32.757155 | orchestrator | Friday 09 January 2026 00:33:02 +0000 (0:00:01.820) 0:07:19.704 ******** 2026-01-09 00:33:32.757165 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.757176 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.757187 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.757198 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.757208 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.757373 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.757401 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.757412 | orchestrator | 2026-01-09 00:33:32.757423 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-09 00:33:32.757434 | orchestrator | Friday 09 January 2026 00:33:03 +0000 (0:00:01.012) 0:07:20.717 ******** 2026-01-09 00:33:32.757445 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:33:32.757456 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:33:32.757467 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:33:32.757478 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:33:32.757488 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:33:32.757499 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:33:32.757510 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:33:32.757521 | orchestrator | 2026-01-09 00:33:32.757532 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-09 00:33:32.757543 | orchestrator | Friday 09 January 2026 00:33:05 +0000 (0:00:01.031) 0:07:21.748 ******** 
2026-01-09 00:33:32.757555 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:33:32.757566 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:33:32.757606 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:33:32.757617 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:33:32.757628 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:33:32.757639 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:33:32.757649 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:33:32.757660 | orchestrator | 2026-01-09 00:33:32.757671 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-09 00:33:32.757695 | orchestrator | Friday 09 January 2026 00:33:05 +0000 (0:00:00.558) 0:07:22.306 ******** 2026-01-09 00:33:32.757706 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.757717 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.757733 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.757751 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.757778 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.757795 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.757812 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.757829 | orchestrator | 2026-01-09 00:33:32.757847 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-09 00:33:32.757865 | orchestrator | Friday 09 January 2026 00:33:06 +0000 (0:00:00.517) 0:07:22.824 ******** 2026-01-09 00:33:32.757903 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.757922 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.757941 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.757952 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.757962 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.757973 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.757983 | orchestrator | ok: [testbed-node-2] 2026-01-09 
00:33:32.757994 | orchestrator | 2026-01-09 00:33:32.758005 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-01-09 00:33:32.758072 | orchestrator | Friday 09 January 2026 00:33:06 +0000 (0:00:00.530) 0:07:23.355 ******** 2026-01-09 00:33:32.758086 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.758097 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.758108 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.758119 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.758129 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.758140 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.758150 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.758161 | orchestrator | 2026-01-09 00:33:32.758172 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-01-09 00:33:32.758182 | orchestrator | Friday 09 January 2026 00:33:07 +0000 (0:00:00.801) 0:07:24.156 ******** 2026-01-09 00:33:32.758193 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.758204 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.758256 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.758269 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.758279 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.758290 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.758300 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.758311 | orchestrator | 2026-01-09 00:33:32.758346 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-01-09 00:33:32.758358 | orchestrator | Friday 09 January 2026 00:33:13 +0000 (0:00:05.804) 0:07:29.960 ******** 2026-01-09 00:33:32.758368 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:33:32.758379 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:33:32.758390 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:33:32.758400 
| orchestrator | skipping: [testbed-node-5] 2026-01-09 00:33:32.758411 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:33:32.758422 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:33:32.758432 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:33:32.758443 | orchestrator | 2026-01-09 00:33:32.758453 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-01-09 00:33:32.758464 | orchestrator | Friday 09 January 2026 00:33:13 +0000 (0:00:00.548) 0:07:30.509 ******** 2026-01-09 00:33:32.758477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:33:32.758502 | orchestrator | 2026-01-09 00:33:32.758514 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-01-09 00:33:32.758524 | orchestrator | Friday 09 January 2026 00:33:14 +0000 (0:00:01.070) 0:07:31.580 ******** 2026-01-09 00:33:32.758535 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.758546 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.758556 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.758567 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.758577 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.758588 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.758598 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.758609 | orchestrator | 2026-01-09 00:33:32.758619 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-01-09 00:33:32.758630 | orchestrator | Friday 09 January 2026 00:33:17 +0000 (0:00:02.208) 0:07:33.788 ******** 2026-01-09 00:33:32.758641 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.758651 | orchestrator | ok: [testbed-node-4] 2026-01-09 
00:33:32.758662 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.758672 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.758682 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.758693 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.758703 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.758714 | orchestrator | 2026-01-09 00:33:32.758724 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-01-09 00:33:32.758735 | orchestrator | Friday 09 January 2026 00:33:18 +0000 (0:00:01.817) 0:07:35.605 ******** 2026-01-09 00:33:32.758746 | orchestrator | ok: [testbed-manager] 2026-01-09 00:33:32.758756 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:33:32.758767 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:33:32.758777 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:33:32.758788 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:33:32.758798 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:33:32.758809 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:33:32.758819 | orchestrator | 2026-01-09 00:33:32.758830 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-01-09 00:33:32.758841 | orchestrator | Friday 09 January 2026 00:33:19 +0000 (0:00:00.874) 0:07:36.480 ******** 2026-01-09 00:33:32.758852 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758865 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758876 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758887 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758897 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758908 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758919 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-09 00:33:32.758929 | orchestrator | 2026-01-09 00:33:32.758940 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-09 00:33:32.758951 | orchestrator | Friday 09 January 2026 00:33:21 +0000 (0:00:02.021) 0:07:38.501 ******** 2026-01-09 00:33:32.758962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:33:32.758979 | orchestrator | 2026-01-09 00:33:32.758990 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-09 00:33:32.759001 | orchestrator | Friday 09 January 2026 00:33:22 +0000 (0:00:00.867) 0:07:39.369 ******** 2026-01-09 00:33:32.759011 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:33:32.759022 | orchestrator | changed: [testbed-manager] 2026-01-09 00:33:32.759033 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:33:32.759043 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:33:32.759054 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:33:32.759064 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:33:32.759075 | orchestrator | changed: 
[testbed-node-1] 2026-01-09 00:33:32.759086 | orchestrator | 2026-01-09 00:33:32.759104 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-09 00:34:04.884749 | orchestrator | Friday 09 January 2026 00:33:32 +0000 (0:00:10.097) 0:07:49.466 ******** 2026-01-09 00:34:04.884874 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:04.884892 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:04.884904 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:04.884915 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:04.884926 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:04.884937 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:04.884948 | orchestrator | ok: [testbed-manager] 2026-01-09 00:34:04.884959 | orchestrator | 2026-01-09 00:34:04.884972 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-09 00:34:04.884983 | orchestrator | Friday 09 January 2026 00:33:35 +0000 (0:00:02.371) 0:07:51.837 ******** 2026-01-09 00:34:04.884994 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:04.885004 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:04.885015 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:04.885026 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:04.885037 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:04.885047 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:04.885058 | orchestrator | 2026-01-09 00:34:04.885069 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-09 00:34:04.885080 | orchestrator | Friday 09 January 2026 00:33:36 +0000 (0:00:01.325) 0:07:53.163 ******** 2026-01-09 00:34:04.885092 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.885107 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.885153 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.885174 | orchestrator | changed: 
[testbed-node-5] 2026-01-09 00:34:04.885226 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.885246 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.885267 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.885286 | orchestrator | 2026-01-09 00:34:04.885305 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-09 00:34:04.885323 | orchestrator | 2026-01-09 00:34:04.885342 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-09 00:34:04.885360 | orchestrator | Friday 09 January 2026 00:33:37 +0000 (0:00:01.305) 0:07:54.469 ******** 2026-01-09 00:34:04.885379 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:34:04.885427 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:34:04.885448 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:34:04.885466 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:34:04.885486 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:34:04.885497 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:34:04.885508 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:34:04.885519 | orchestrator | 2026-01-09 00:34:04.885530 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-09 00:34:04.885541 | orchestrator | 2026-01-09 00:34:04.885551 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-09 00:34:04.885562 | orchestrator | Friday 09 January 2026 00:33:38 +0000 (0:00:00.852) 0:07:55.321 ******** 2026-01-09 00:34:04.885599 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.885611 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.885622 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.885632 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.885643 | orchestrator | changed: [testbed-node-0] 2026-01-09 
00:34:04.885654 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.885664 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.885675 | orchestrator | 2026-01-09 00:34:04.885687 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-09 00:34:04.885697 | orchestrator | Friday 09 January 2026 00:33:40 +0000 (0:00:01.422) 0:07:56.743 ******** 2026-01-09 00:34:04.885708 | orchestrator | ok: [testbed-manager] 2026-01-09 00:34:04.885719 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:04.885730 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:04.885741 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:04.885751 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:04.885762 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:04.885772 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:04.885783 | orchestrator | 2026-01-09 00:34:04.885794 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-09 00:34:04.885805 | orchestrator | Friday 09 January 2026 00:33:41 +0000 (0:00:01.395) 0:07:58.139 ******** 2026-01-09 00:34:04.885821 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:34:04.885839 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:34:04.885859 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:34:04.885877 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:34:04.885896 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:34:04.885907 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:34:04.885917 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:34:04.885928 | orchestrator | 2026-01-09 00:34:04.885939 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-09 00:34:04.885958 | orchestrator | Friday 09 January 2026 00:33:41 +0000 (0:00:00.499) 0:07:58.638 ******** 2026-01-09 00:34:04.885969 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:34:04.885982 | orchestrator | 2026-01-09 00:34:04.885993 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-09 00:34:04.886004 | orchestrator | Friday 09 January 2026 00:33:42 +0000 (0:00:01.045) 0:07:59.684 ******** 2026-01-09 00:34:04.886096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:34:04.886112 | orchestrator | 2026-01-09 00:34:04.886123 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-09 00:34:04.886134 | orchestrator | Friday 09 January 2026 00:33:43 +0000 (0:00:00.834) 0:08:00.518 ******** 2026-01-09 00:34:04.886145 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886156 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886166 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886177 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886209 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886220 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.886231 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886241 | orchestrator | 2026-01-09 00:34:04.886274 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-09 00:34:04.886286 | orchestrator | Friday 09 January 2026 00:33:52 +0000 (0:00:08.803) 0:08:09.322 ******** 2026-01-09 00:34:04.886297 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886307 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886318 | orchestrator | changed: [testbed-node-4] 2026-01-09 
00:34:04.886329 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886351 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886362 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886373 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886383 | orchestrator | 2026-01-09 00:34:04.886395 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-09 00:34:04.886406 | orchestrator | Friday 09 January 2026 00:33:53 +0000 (0:00:01.223) 0:08:10.546 ******** 2026-01-09 00:34:04.886416 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886427 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886438 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.886449 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886459 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886470 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886481 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886492 | orchestrator | 2026-01-09 00:34:04.886503 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-09 00:34:04.886513 | orchestrator | Friday 09 January 2026 00:33:55 +0000 (0:00:01.434) 0:08:11.981 ******** 2026-01-09 00:34:04.886524 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886535 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886545 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.886556 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886567 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886578 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886588 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886599 | orchestrator | 2026-01-09 00:34:04.886610 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-09 00:34:04.886621 | orchestrator | Friday 09 January 2026 00:33:57 +0000 (0:00:01.978) 0:08:13.959 ******** 2026-01-09 00:34:04.886632 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886642 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886653 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.886663 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886674 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886685 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886695 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886706 | orchestrator | 2026-01-09 00:34:04.886717 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-01-09 00:34:04.886728 | orchestrator | Friday 09 January 2026 00:33:58 +0000 (0:00:01.300) 0:08:15.260 ******** 2026-01-09 00:34:04.886739 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.886749 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.886760 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.886771 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.886782 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.886793 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.886803 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.886814 | orchestrator | 2026-01-09 00:34:04.886825 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-01-09 00:34:04.886836 | orchestrator | 2026-01-09 00:34:04.886847 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-01-09 00:34:04.886857 | orchestrator | Friday 09 January 2026 00:33:59 +0000 (0:00:01.151) 0:08:16.411 ******** 2026-01-09 00:34:04.886868 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-09 00:34:04.886879 | orchestrator | 2026-01-09 00:34:04.886890 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-09 00:34:04.886901 | orchestrator | Friday 09 January 2026 00:34:00 +0000 (0:00:00.933) 0:08:17.345 ******** 2026-01-09 00:34:04.886912 | orchestrator | ok: [testbed-manager] 2026-01-09 00:34:04.886923 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:04.886934 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:04.886951 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:04.886962 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:04.886973 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:04.886984 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:04.886994 | orchestrator | 2026-01-09 00:34:04.887005 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-09 00:34:04.887016 | orchestrator | Friday 09 January 2026 00:34:01 +0000 (0:00:01.110) 0:08:18.456 ******** 2026-01-09 00:34:04.887033 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:04.887044 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:04.887055 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:04.887066 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:04.887076 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:04.887087 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:04.887098 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:04.887108 | orchestrator | 2026-01-09 00:34:04.887119 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-01-09 00:34:04.887130 | orchestrator | Friday 09 January 2026 00:34:02 +0000 (0:00:01.217) 0:08:19.674 ******** 2026-01-09 00:34:04.887141 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-09 00:34:04.887152 | orchestrator | 2026-01-09 00:34:04.887163 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-09 00:34:04.887174 | orchestrator | Friday 09 January 2026 00:34:03 +0000 (0:00:01.049) 0:08:20.724 ******** 2026-01-09 00:34:04.887261 | orchestrator | ok: [testbed-manager] 2026-01-09 00:34:04.887275 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:04.887286 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:04.887297 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:04.887308 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:04.887322 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:04.887341 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:04.887357 | orchestrator | 2026-01-09 00:34:04.887384 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-09 00:34:06.548323 | orchestrator | Friday 09 January 2026 00:34:04 +0000 (0:00:00.874) 0:08:21.598 ******** 2026-01-09 00:34:06.548446 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:06.548463 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:06.548474 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:06.548484 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:06.548494 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:06.548504 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:06.548514 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:06.548525 | orchestrator | 2026-01-09 00:34:06.548536 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:34:06.548548 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-09 00:34:06.548560 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-01-09 00:34:06.548570 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-09 00:34:06.548580 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-09 00:34:06.548589 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-01-09 00:34:06.548599 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-09 00:34:06.548609 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-09 00:34:06.548643 | orchestrator | 2026-01-09 00:34:06.548654 | orchestrator | 2026-01-09 00:34:06.548663 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:34:06.548674 | orchestrator | Friday 09 January 2026 00:34:06 +0000 (0:00:01.140) 0:08:22.739 ******** 2026-01-09 00:34:06.548684 | orchestrator | =============================================================================== 2026-01-09 00:34:06.548694 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.46s 2026-01-09 00:34:06.548703 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.34s 2026-01-09 00:34:06.548713 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.25s 2026-01-09 00:34:06.548723 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.18s 2026-01-09 00:34:06.548732 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.49s 2026-01-09 00:34:06.548743 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.11s 2026-01-09 00:34:06.548753 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.43s 2026-01-09 00:34:06.548763 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.10s 2026-01-09 00:34:06.548772 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.74s 2026-01-09 00:34:06.548782 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.63s 2026-01-09 00:34:06.548792 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.46s 2026-01-09 00:34:06.548802 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.93s 2026-01-09 00:34:06.548814 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.87s 2026-01-09 00:34:06.548826 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.80s 2026-01-09 00:34:06.548837 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.21s 2026-01-09 00:34:06.548862 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.91s 2026-01-09 00:34:06.548875 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.02s 2026-01-09 00:34:06.548887 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.99s 2026-01-09 00:34:06.548899 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.80s 2026-01-09 00:34:06.548910 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.54s 2026-01-09 00:34:06.877494 | orchestrator | + osism apply fail2ban 2026-01-09 00:34:19.918240 | orchestrator | 2026-01-09 00:34:19 | INFO  | Task 44ab7c03-d7e9-4d29-9433-ef66a6de18e9 (fail2ban) was prepared for execution. 
2026-01-09 00:34:19.918392 | orchestrator | 2026-01-09 00:34:19 | INFO  | It takes a moment until task 44ab7c03-d7e9-4d29-9433-ef66a6de18e9 (fail2ban) has been started and output is visible here. 2026-01-09 00:34:42.675328 | orchestrator | 2026-01-09 00:34:42.675487 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-01-09 00:34:42.675506 | orchestrator | 2026-01-09 00:34:42.675516 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-01-09 00:34:42.675527 | orchestrator | Friday 09 January 2026 00:34:24 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-01-09 00:34:42.675540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:34:42.675551 | orchestrator | 2026-01-09 00:34:42.675561 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-01-09 00:34:42.675571 | orchestrator | Friday 09 January 2026 00:34:25 +0000 (0:00:01.177) 0:00:01.459 ******** 2026-01-09 00:34:42.675608 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:42.675620 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:42.675629 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:42.675639 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:42.675651 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:42.675661 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:42.675672 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:42.675683 | orchestrator | 2026-01-09 00:34:42.675694 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-01-09 00:34:42.675705 | orchestrator | Friday 09 January 2026 00:34:37 +0000 (0:00:11.493) 0:00:12.953 ******** 
2026-01-09 00:34:42.675716 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:42.675727 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:42.675738 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:42.675750 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:42.675760 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:42.675772 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:42.675783 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:34:42.675794 | orchestrator | 2026-01-09 00:34:42.675805 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-09 00:34:42.675816 | orchestrator | Friday 09 January 2026 00:34:38 +0000 (0:00:01.567) 0:00:14.520 ******** 2026-01-09 00:34:42.675828 | orchestrator | ok: [testbed-manager] 2026-01-09 00:34:42.675840 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:34:42.675851 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:34:42.675862 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:34:42.675873 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:34:42.675883 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:34:42.675894 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:34:42.675904 | orchestrator | 2026-01-09 00:34:42.675916 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-09 00:34:42.675928 | orchestrator | Friday 09 January 2026 00:34:40 +0000 (0:00:01.492) 0:00:16.013 ******** 2026-01-09 00:34:42.675938 | orchestrator | changed: [testbed-manager] 2026-01-09 00:34:42.675949 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:34:42.675960 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:34:42.675971 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:34:42.675982 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:34:42.675993 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:34:42.676005 | orchestrator | changed: 
[testbed-node-5] 2026-01-09 00:34:42.676014 | orchestrator | 2026-01-09 00:34:42.676024 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:34:42.676034 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676045 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676055 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676065 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676081 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676096 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676112 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:34:42.676128 | orchestrator | 2026-01-09 00:34:42.676142 | orchestrator | 2026-01-09 00:34:42.676183 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:34:42.676233 | orchestrator | Friday 09 January 2026 00:34:42 +0000 (0:00:01.871) 0:00:17.885 ******** 2026-01-09 00:34:42.676253 | orchestrator | =============================================================================== 2026-01-09 00:34:42.676263 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.49s 2026-01-09 00:34:42.676272 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.87s 2026-01-09 00:34:42.676281 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.57s 2026-01-09 00:34:42.676291 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.49s 2026-01-09 00:34:42.676300 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.18s 2026-01-09 00:34:42.992891 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-09 00:34:42.993022 | orchestrator | + osism apply network 2026-01-09 00:34:55.152735 | orchestrator | 2026-01-09 00:34:55 | INFO  | Task 2c5e2117-f688-45eb-b3c5-1ebd27d9f0e3 (network) was prepared for execution. 2026-01-09 00:34:55.152846 | orchestrator | 2026-01-09 00:34:55 | INFO  | It takes a moment until task 2c5e2117-f688-45eb-b3c5-1ebd27d9f0e3 (network) has been started and output is visible here. 2026-01-09 00:35:25.687656 | orchestrator | 2026-01-09 00:35:25.687761 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-01-09 00:35:25.687773 | orchestrator | 2026-01-09 00:35:25.687781 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-01-09 00:35:25.687789 | orchestrator | Friday 09 January 2026 00:34:59 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-01-09 00:35:25.687797 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.687805 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.687813 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.687821 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.687828 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.687835 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.687843 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.687850 | orchestrator | 2026-01-09 00:35:25.687857 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-01-09 00:35:25.687864 | orchestrator | Friday 09 January 2026 00:35:00 +0000 (0:00:00.773) 0:00:01.040 ******** 2026-01-09 00:35:25.687874 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:35:25.687884 | orchestrator | 2026-01-09 00:35:25.687891 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-09 00:35:25.687898 | orchestrator | Friday 09 January 2026 00:35:01 +0000 (0:00:01.275) 0:00:02.316 ******** 2026-01-09 00:35:25.687905 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.687913 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.687920 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.687927 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.687934 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.687941 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.687948 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.687955 | orchestrator | 2026-01-09 00:35:25.687962 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-09 00:35:25.687970 | orchestrator | Friday 09 January 2026 00:35:03 +0000 (0:00:02.134) 0:00:04.450 ******** 2026-01-09 00:35:25.687977 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.687984 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.687991 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.687998 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.688005 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.688013 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.688020 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.688027 | orchestrator | 2026-01-09 00:35:25.688034 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-09 00:35:25.688063 | orchestrator | Friday 09 January 2026 00:35:05 +0000 (0:00:01.858) 0:00:06.309 ******** 
2026-01-09 00:35:25.688071 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-09 00:35:25.688078 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-09 00:35:25.688086 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-09 00:35:25.688093 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-09 00:35:25.688127 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-09 00:35:25.688134 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-09 00:35:25.688141 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-09 00:35:25.688149 | orchestrator | 2026-01-09 00:35:25.688156 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-09 00:35:25.688163 | orchestrator | Friday 09 January 2026 00:35:06 +0000 (0:00:01.079) 0:00:07.389 ******** 2026-01-09 00:35:25.688170 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 00:35:25.688179 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 00:35:25.688186 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-09 00:35:25.688193 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-09 00:35:25.688200 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-09 00:35:25.688207 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 00:35:25.688214 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-09 00:35:25.688221 | orchestrator | 2026-01-09 00:35:25.688228 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-09 00:35:25.688235 | orchestrator | Friday 09 January 2026 00:35:10 +0000 (0:00:03.836) 0:00:11.225 ******** 2026-01-09 00:35:25.688242 | orchestrator | changed: [testbed-manager] 2026-01-09 00:35:25.688250 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:35:25.688257 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:35:25.688264 | orchestrator | changed: 
[testbed-node-2] 2026-01-09 00:35:25.688271 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:35:25.688278 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:35:25.688285 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:35:25.688293 | orchestrator | 2026-01-09 00:35:25.688312 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-09 00:35:25.688319 | orchestrator | Friday 09 January 2026 00:35:12 +0000 (0:00:01.694) 0:00:12.919 ******** 2026-01-09 00:35:25.688326 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 00:35:25.688333 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 00:35:25.688340 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-09 00:35:25.688347 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-09 00:35:25.688354 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 00:35:25.688361 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-09 00:35:25.688368 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-09 00:35:25.688375 | orchestrator | 2026-01-09 00:35:25.688382 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-09 00:35:25.688390 | orchestrator | Friday 09 January 2026 00:35:14 +0000 (0:00:01.772) 0:00:14.692 ******** 2026-01-09 00:35:25.688397 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.688404 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.688411 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.688418 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.688425 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.688432 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.688439 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.688446 | orchestrator | 2026-01-09 00:35:25.688453 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-09 00:35:25.688474 | 
orchestrator | Friday 09 January 2026 00:35:15 +0000 (0:00:01.323) 0:00:16.015 ******** 2026-01-09 00:35:25.688482 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:25.688489 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:25.688496 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:25.688509 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:25.688517 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:25.688524 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:25.688531 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:25.688538 | orchestrator | 2026-01-09 00:35:25.688556 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-09 00:35:25.688564 | orchestrator | Friday 09 January 2026 00:35:16 +0000 (0:00:00.732) 0:00:16.748 ******** 2026-01-09 00:35:25.688571 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.688578 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.688585 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.688592 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.688599 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.688606 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.688613 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.688620 | orchestrator | 2026-01-09 00:35:25.688627 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-09 00:35:25.688634 | orchestrator | Friday 09 January 2026 00:35:18 +0000 (0:00:02.356) 0:00:19.104 ******** 2026-01-09 00:35:25.688642 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:25.688649 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:25.688656 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:25.688663 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:25.688670 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:25.688677 | 
orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:25.688684 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-09 00:35:25.688693 | orchestrator | 2026-01-09 00:35:25.688700 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-09 00:35:25.688707 | orchestrator | Friday 09 January 2026 00:35:19 +0000 (0:00:00.918) 0:00:20.023 ******** 2026-01-09 00:35:25.688714 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.688721 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:35:25.688728 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:35:25.688735 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:35:25.688742 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:35:25.688749 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:35:25.688756 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:35:25.688763 | orchestrator | 2026-01-09 00:35:25.688770 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-09 00:35:25.688777 | orchestrator | Friday 09 January 2026 00:35:21 +0000 (0:00:01.776) 0:00:21.800 ******** 2026-01-09 00:35:25.688784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:35:25.688793 | orchestrator | 2026-01-09 00:35:25.688801 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-09 00:35:25.688808 | orchestrator | Friday 09 January 2026 00:35:22 +0000 (0:00:01.291) 0:00:23.092 ******** 2026-01-09 00:35:25.688815 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.688822 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.688829 | orchestrator 
| ok: [testbed-node-1] 2026-01-09 00:35:25.688836 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.688843 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.688850 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.688857 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.688864 | orchestrator | 2026-01-09 00:35:25.688871 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-09 00:35:25.688878 | orchestrator | Friday 09 January 2026 00:35:23 +0000 (0:00:01.181) 0:00:24.273 ******** 2026-01-09 00:35:25.688885 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:25.688892 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:25.688899 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:25.688911 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:25.688918 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:25.688925 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:25.688932 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:25.688939 | orchestrator | 2026-01-09 00:35:25.688946 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-09 00:35:25.688953 | orchestrator | Friday 09 January 2026 00:35:24 +0000 (0:00:00.682) 0:00:24.956 ******** 2026-01-09 00:35:25.688960 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.688967 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.688975 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.688982 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.688989 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.688997 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.689004 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689011 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.689018 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689025 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689032 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689039 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689046 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-09 00:35:25.689053 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-09 00:35:25.689060 | orchestrator | 2026-01-09 00:35:25.689072 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-09 00:35:43.809693 | orchestrator | Friday 09 January 2026 00:35:25 +0000 (0:00:01.339) 0:00:26.295 ******** 2026-01-09 00:35:43.809813 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:43.809827 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:43.809837 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:43.809846 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:43.809855 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:43.809864 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:43.809874 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:43.809883 | orchestrator | 2026-01-09 00:35:43.809893 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-09 00:35:43.809902 | orchestrator | Friday 09 January 2026 00:35:26 +0000 (0:00:00.691) 0:00:26.987 ******** 2026-01-09 00:35:43.809932 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-01-09 00:35:43.809944 | orchestrator | 2026-01-09 00:35:43.809953 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-09 00:35:43.809962 | orchestrator | Friday 09 January 2026 00:35:31 +0000 (0:00:04.835) 0:00:31.822 ******** 2026-01-09 00:35:43.809973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.809982 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810072 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 
42}}) 2026-01-09 00:35:43.810123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810239 | orchestrator | 2026-01-09 00:35:43.810249 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-09 00:35:43.810260 | orchestrator | Friday 09 January 2026 00:35:37 +0000 (0:00:06.073) 0:00:37.896 ******** 2026-01-09 00:35:43.810269 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810309 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810339 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-09 00:35:43.810374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 
'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:43.810411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:57.718319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-09 00:35:57.718444 | orchestrator | 2026-01-09 00:35:57.718462 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-09 00:35:57.718476 | orchestrator | Friday 09 January 2026 00:35:43 +0000 (0:00:06.516) 0:00:44.412 ******** 2026-01-09 00:35:57.718516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:35:57.718529 | orchestrator | 2026-01-09 00:35:57.718540 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-09 00:35:57.718552 | orchestrator | Friday 09 January 2026 00:35:45 +0000 (0:00:01.388) 0:00:45.801 ******** 2026-01-09 00:35:57.718562 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:57.718575 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:57.718585 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:57.718596 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:57.718607 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:57.718618 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:57.718629 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:57.718640 | orchestrator | 2026-01-09 00:35:57.718651 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-09 00:35:57.718662 | orchestrator | Friday 09 January 2026 00:35:46 +0000 (0:00:01.293) 0:00:47.094 ******** 2026-01-09 00:35:57.718673 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718685 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.718696 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.718707 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.718718 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718729 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.718740 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.718750 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.718761 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:57.718773 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718783 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.718794 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.718805 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.718815 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:57.718827 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718839 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.718852 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.718864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.718877 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:57.718890 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718902 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.718929 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.718942 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.718955 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:57.718967 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.718981 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.719004 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.719016 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.719030 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:57.719042 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:57.719055 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-09 00:35:57.719090 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-09 00:35:57.719103 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-09 00:35:57.719116 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-09 00:35:57.719129 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:57.719141 | orchestrator | 2026-01-09 00:35:57.719153 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-09 00:35:57.719183 | orchestrator | Friday 09 January 2026 00:35:47 +0000 (0:00:01.029) 0:00:48.124 ******** 2026-01-09 00:35:57.719196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:35:57.719207 | orchestrator | 2026-01-09 00:35:57.719218 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-09 00:35:57.719229 | orchestrator | Friday 09 January 2026 00:35:48 +0000 (0:00:01.288) 0:00:49.413 ******** 2026-01-09 00:35:57.719240 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:57.719250 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:57.719261 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:57.719272 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:57.719283 | orchestrator | skipping: [testbed-node-3] 2026-01-09 
00:35:57.719294 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:57.719304 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:57.719315 | orchestrator | 2026-01-09 00:35:57.719326 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-09 00:35:57.719337 | orchestrator | Friday 09 January 2026 00:35:49 +0000 (0:00:00.664) 0:00:50.077 ******** 2026-01-09 00:35:57.719348 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:57.719358 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:57.719369 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:57.719380 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:57.719391 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:57.719401 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:57.719412 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:57.719423 | orchestrator | 2026-01-09 00:35:57.719434 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-09 00:35:57.719445 | orchestrator | Friday 09 January 2026 00:35:50 +0000 (0:00:00.837) 0:00:50.915 ******** 2026-01-09 00:35:57.719456 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:35:57.719467 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:35:57.719478 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:35:57.719489 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:35:57.719499 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:35:57.719510 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:35:57.719521 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:35:57.719532 | orchestrator | 2026-01-09 00:35:57.719542 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-09 00:35:57.719553 | orchestrator | Friday 09 January 2026 00:35:50 +0000 (0:00:00.664) 0:00:51.580 ******** 2026-01-09 
00:35:57.719564 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:57.719575 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:57.719586 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:57.719597 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:57.719607 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:57.719626 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:57.719637 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:57.719648 | orchestrator | 2026-01-09 00:35:57.719659 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-09 00:35:57.719670 | orchestrator | Friday 09 January 2026 00:35:52 +0000 (0:00:01.898) 0:00:53.478 ******** 2026-01-09 00:35:57.719681 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:57.719691 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:57.719702 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:57.719713 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:57.719723 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:57.719734 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:57.719744 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:57.719755 | orchestrator | 2026-01-09 00:35:57.719766 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-09 00:35:57.719777 | orchestrator | Friday 09 January 2026 00:35:53 +0000 (0:00:01.026) 0:00:54.505 ******** 2026-01-09 00:35:57.719788 | orchestrator | ok: [testbed-manager] 2026-01-09 00:35:57.719799 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:35:57.719809 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:35:57.719820 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:35:57.719831 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:35:57.719841 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:35:57.719852 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:35:57.719863 | orchestrator | 2026-01-09 00:35:57.719873 
| orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-09 00:35:57.719884 | orchestrator | Friday 09 January 2026 00:35:56 +0000 (0:00:02.395) 0:00:56.900 ********
2026-01-09 00:35:57.719895 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:35:57.719906 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:35:57.719922 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:35:57.719934 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:35:57.719944 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:35:57.719955 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:35:57.719966 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:35:57.719977 | orchestrator |
2026-01-09 00:35:57.719988 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-09 00:35:57.719998 | orchestrator | Friday 09 January 2026 00:35:57 +0000 (0:00:00.860) 0:00:57.761 ********
2026-01-09 00:35:57.720010 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:35:57.720020 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:35:57.720031 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:35:57.720042 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:35:57.720053 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:35:57.720092 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:35:57.720103 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:35:57.720114 | orchestrator |
2026-01-09 00:35:57.720125 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:35:57.720137 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-09 00:35:57.720150 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:57.720168 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:58.173904 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:58.173987 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:58.173995 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:58.174051 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 00:35:58.174083 | orchestrator |
2026-01-09 00:35:58.174090 | orchestrator |
2026-01-09 00:35:58.174096 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:35:58.174102 | orchestrator | Friday 09 January 2026 00:35:57 +0000 (0:00:00.570) 0:00:58.331 ********
2026-01-09 00:35:58.174107 | orchestrator | ===============================================================================
2026-01-09 00:35:58.174112 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.52s
2026-01-09 00:35:58.174117 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.07s
2026-01-09 00:35:58.174122 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.84s
2026-01-09 00:35:58.174127 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.84s
2026-01-09 00:35:58.174132 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.40s
2026-01-09 00:35:58.174137 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.36s
2026-01-09 00:35:58.174141 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.13s
2026-01-09 00:35:58.174146 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.90s
2026-01-09 00:35:58.174151 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s
2026-01-09 00:35:58.174156 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.78s
2026-01-09 00:35:58.174160 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.77s
2026-01-09 00:35:58.174165 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.69s
2026-01-09 00:35:58.174170 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.39s
2026-01-09 00:35:58.174175 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s
2026-01-09 00:35:58.174179 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.32s
2026-01-09 00:35:58.174184 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.29s
2026-01-09 00:35:58.174189 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2026-01-09 00:35:58.174193 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.29s
2026-01-09 00:35:58.174198 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s
2026-01-09 00:35:58.174203 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-01-09 00:35:58.514409 | orchestrator | + osism apply wireguard
2026-01-09 00:36:10.646589 | orchestrator | 2026-01-09 00:36:10 | INFO  | Task 0cf35a3f-2e75-4582-a274-a74516e8436d (wireguard) was prepared for execution.
2026-01-09 00:36:10.646694 | orchestrator | 2026-01-09 00:36:10 | INFO  | It takes a moment until task 0cf35a3f-2e75-4582-a274-a74516e8436d (wireguard) has been started and output is visible here.
2026-01-09 00:36:32.010481 | orchestrator |
2026-01-09 00:36:32.010617 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-09 00:36:32.010637 | orchestrator |
2026-01-09 00:36:32.010667 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-09 00:36:32.010681 | orchestrator | Friday 09 January 2026 00:36:15 +0000 (0:00:00.224) 0:00:00.224 ********
2026-01-09 00:36:32.010692 | orchestrator | ok: [testbed-manager]
2026-01-09 00:36:32.010704 | orchestrator |
2026-01-09 00:36:32.010721 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-09 00:36:32.010732 | orchestrator | Friday 09 January 2026 00:36:16 +0000 (0:00:01.682) 0:00:01.906 ********
2026-01-09 00:36:32.010744 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.010781 | orchestrator |
2026-01-09 00:36:32.010792 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-09 00:36:32.010803 | orchestrator | Friday 09 January 2026 00:36:23 +0000 (0:00:07.227) 0:00:09.134 ********
2026-01-09 00:36:32.010814 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.010824 | orchestrator |
2026-01-09 00:36:32.010835 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-09 00:36:32.010846 | orchestrator | Friday 09 January 2026 00:36:24 +0000 (0:00:00.428) 0:00:09.697 ********
2026-01-09 00:36:32.010857 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.010867 | orchestrator |
2026-01-09 00:36:32.010878 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-09 00:36:32.010889 | orchestrator | Friday 09 January 2026 00:36:24 +0000 (0:00:00.781) 0:00:10.126 ********
2026-01-09 00:36:32.010899 | orchestrator | ok: [testbed-manager]
2026-01-09 00:36:32.010910 | orchestrator |
2026-01-09 00:36:32.010921 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-09 00:36:32.010931 | orchestrator | Friday 09 January 2026 00:36:25 +0000 (0:00:00.781) 0:00:10.907 ********
2026-01-09 00:36:32.010942 | orchestrator | ok: [testbed-manager]
2026-01-09 00:36:32.010952 | orchestrator |
2026-01-09 00:36:32.010963 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-09 00:36:32.010974 | orchestrator | Friday 09 January 2026 00:36:26 +0000 (0:00:00.457) 0:00:11.364 ********
2026-01-09 00:36:32.010984 | orchestrator | ok: [testbed-manager]
2026-01-09 00:36:32.010995 | orchestrator |
2026-01-09 00:36:32.011008 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-09 00:36:32.011068 | orchestrator | Friday 09 January 2026 00:36:26 +0000 (0:00:00.433) 0:00:11.798 ********
2026-01-09 00:36:32.011081 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.011094 | orchestrator |
2026-01-09 00:36:32.011107 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-09 00:36:32.011120 | orchestrator | Friday 09 January 2026 00:36:27 +0000 (0:00:01.222) 0:00:13.021 ********
2026-01-09 00:36:32.011133 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-09 00:36:32.011147 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.011159 | orchestrator |
2026-01-09 00:36:32.011171 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-09 00:36:32.011183 | orchestrator | Friday 09 January 2026 00:36:28 +0000 (0:00:00.965) 0:00:13.986 ********
2026-01-09 00:36:32.011193 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.011204 | orchestrator |
2026-01-09 00:36:32.011215 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-09 00:36:32.011226 | orchestrator | Friday 09 January 2026 00:36:30 +0000 (0:00:01.776) 0:00:15.763 ********
2026-01-09 00:36:32.011236 | orchestrator | changed: [testbed-manager]
2026-01-09 00:36:32.011247 | orchestrator |
2026-01-09 00:36:32.011258 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:36:32.011269 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:36:32.011281 | orchestrator |
2026-01-09 00:36:32.011292 | orchestrator |
2026-01-09 00:36:32.011303 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:36:32.011314 | orchestrator | Friday 09 January 2026 00:36:31 +0000 (0:00:01.022) 0:00:16.785 ********
2026-01-09 00:36:32.011325 | orchestrator | ===============================================================================
2026-01-09 00:36:32.011336 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.23s
2026-01-09 00:36:32.011346 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.78s
2026-01-09 00:36:32.011357 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s
2026-01-09 00:36:32.011368 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2026-01-09 00:36:32.011388 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.02s
2026-01-09 00:36:32.011399 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s
2026-01-09 00:36:32.011410 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.78s
2026-01-09 00:36:32.011420 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2026-01-09 00:36:32.011431 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-01-09 00:36:32.011442 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-01-09 00:36:32.011453 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-01-09 00:36:32.369073 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-09 00:36:32.411439 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-09 00:36:32.411551 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-09 00:36:32.489545 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 178 0 --:--:-- --:--:-- --:--:-- 179
2026-01-09 00:36:32.505649 | orchestrator | + osism apply --environment custom workarounds
2026-01-09 00:36:34.529110 | orchestrator | 2026-01-09 00:36:34 | INFO  | Trying to run play workarounds in environment custom
2026-01-09 00:36:44.701494 | orchestrator | 2026-01-09 00:36:44 | INFO  | Task e9b7ff38-1074-4dce-80a0-e405f0b0504f (workarounds) was prepared for execution.
2026-01-09 00:36:44.701599 | orchestrator | 2026-01-09 00:36:44 | INFO  | It takes a moment until task e9b7ff38-1074-4dce-80a0-e405f0b0504f (workarounds) has been started and output is visible here.
2026-01-09 00:37:10.325619 | orchestrator |
2026-01-09 00:37:10.325740 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 00:37:10.325758 | orchestrator |
2026-01-09 00:37:10.325769 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-09 00:37:10.325781 | orchestrator | Friday 09 January 2026 00:36:48 +0000 (0:00:00.118) 0:00:00.118 ********
2026-01-09 00:37:10.325792 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325803 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325814 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325825 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325835 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325846 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325857 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-09 00:37:10.325867 | orchestrator |
2026-01-09 00:37:10.325878 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-09 00:37:10.325889 | orchestrator |
2026-01-09 00:37:10.325900 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-09 00:37:10.325910 | orchestrator | Friday 09 January 2026 00:36:49 +0000 (0:00:00.856) 0:00:00.974 ********
2026-01-09 00:37:10.325921 | orchestrator | ok: [testbed-manager]
2026-01-09 00:37:10.325934 | orchestrator |
2026-01-09 00:37:10.325945 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-09 00:37:10.325956 | orchestrator |
2026-01-09 00:37:10.325993 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-09 00:37:10.326005 | orchestrator | Friday 09 January 2026 00:36:51 +0000 (0:00:02.574) 0:00:03.549 ********
2026-01-09 00:37:10.326073 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:37:10.326087 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:37:10.326097 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:37:10.326108 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:37:10.326119 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:37:10.326149 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:37:10.326162 | orchestrator |
2026-01-09 00:37:10.326175 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-09 00:37:10.326187 | orchestrator |
2026-01-09 00:37:10.326199 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-09 00:37:10.326212 | orchestrator | Friday 09 January 2026 00:36:53 +0000 (0:00:01.912) 0:00:05.461 ********
2026-01-09 00:37:10.326225 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326239 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326252 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326263 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326276 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326289 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-09 00:37:10.326300 | orchestrator |
2026-01-09 00:37:10.326313 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-09 00:37:10.326325 | orchestrator | Friday 09 January 2026 00:36:55 +0000 (0:00:01.562) 0:00:07.024 ********
2026-01-09 00:37:10.326337 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:37:10.326351 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:37:10.326363 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:37:10.326375 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:37:10.326387 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:37:10.326399 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:37:10.326412 | orchestrator |
2026-01-09 00:37:10.326425 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-09 00:37:10.326445 | orchestrator | Friday 09 January 2026 00:36:59 +0000 (0:00:03.993) 0:00:11.017 ********
2026-01-09 00:37:10.326465 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:37:10.326482 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:37:10.326511 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:37:10.326532 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:37:10.326552 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:37:10.326571 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:37:10.326589 | orchestrator |
2026-01-09 00:37:10.326609 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-09 00:37:10.326628 | orchestrator |
2026-01-09 00:37:10.326646 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-09 00:37:10.326667 | orchestrator | Friday 09 January 2026 00:37:00 +0000 (0:00:00.721) 0:00:11.739 ********
2026-01-09 00:37:10.326686 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:37:10.326703 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:37:10.326721 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:37:10.326739 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:37:10.326757 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:37:10.326777 | orchestrator | changed: [testbed-manager]
2026-01-09 00:37:10.326797 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:37:10.326817 | orchestrator |
2026-01-09 00:37:10.326840 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-09 00:37:10.326860 | orchestrator | Friday 09 January 2026 00:37:01 +0000 (0:00:01.593) 0:00:13.332 ********
2026-01-09 00:37:10.326880 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:37:10.326903 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:37:10.326925 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:37:10.326945 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:37:10.326987 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:37:10.327008 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:37:10.327071 | orchestrator | changed: [testbed-manager]
2026-01-09 00:37:10.327091 | orchestrator |
2026-01-09 00:37:10.327126 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-09 00:37:10.327148 | orchestrator | Friday 09 January 2026 00:37:03 +0000 (0:00:01.574) 0:00:14.956 ********
2026-01-09 00:37:10.327165 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:37:10.327182 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:37:10.327200 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:37:10.327219 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:37:10.327238 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:37:10.327256 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:37:10.327274 | orchestrator | ok: [testbed-manager]
2026-01-09 00:37:10.327291 | orchestrator |
2026-01-09 00:37:10.327311 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-09 00:37:10.327328 | orchestrator | Friday 09 January 2026 00:37:04 +0000 (0:00:01.574) 0:00:16.531 ********
2026-01-09 00:37:10.327344 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:37:10.327362 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:37:10.327379 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:37:10.327397 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:37:10.327415 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:37:10.327432 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:37:10.327451 | orchestrator | changed: [testbed-manager]
2026-01-09 00:37:10.327468 | orchestrator |
2026-01-09 00:37:10.327489 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-09 00:37:10.327507 | orchestrator | Friday 09 January 2026 00:37:06 +0000 (0:00:01.861) 0:00:18.392 ********
2026-01-09 00:37:10.327525 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:37:10.327543 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:37:10.327561 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:37:10.327580 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:37:10.327599 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:37:10.327618 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:37:10.327638 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:37:10.327657 | orchestrator |
2026-01-09 00:37:10.327677 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-09 00:37:10.327698 | orchestrator |
2026-01-09 00:37:10.327717 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-09 00:37:10.327736 | orchestrator | Friday 09 January 2026 00:37:07 +0000 (0:00:00.645) 0:00:19.038 ********
2026-01-09 00:37:10.327754 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:37:10.327771 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:37:10.327790 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:37:10.327808 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:37:10.327825 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:37:10.327844 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:37:10.327864 | orchestrator | ok: [testbed-manager]
2026-01-09 00:37:10.327883 | orchestrator |
2026-01-09 00:37:10.327902 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:37:10.327922 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:37:10.327943 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.327961 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.328088 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.328109 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.328152 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.328171 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:10.328191 | orchestrator |
2026-01-09 00:37:10.328211 | orchestrator |
2026-01-09 00:37:10.328230 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:37:10.328252 | orchestrator | Friday 09 January 2026 00:37:10 +0000 (0:00:02.815) 0:00:21.854 ********
2026-01-09 00:37:10.328272 | orchestrator | ===============================================================================
2026-01-09 00:37:10.328292 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.99s
2026-01-09 00:37:10.328313 | orchestrator | Install python3-docker -------------------------------------------------- 2.82s
2026-01-09 00:37:10.328333 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s
2026-01-09 00:37:10.328352 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s
2026-01-09 00:37:10.328372 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.86s
2026-01-09 00:37:10.328392 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s
2026-01-09 00:37:10.328412 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s
2026-01-09 00:37:10.328441 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s
2026-01-09 00:37:10.328459 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s
2026-01-09 00:37:10.328477 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-01-09 00:37:10.328495 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s
2026-01-09 00:37:10.328533 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-01-09 00:37:11.018827 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-09 00:37:23.236865 | orchestrator | 2026-01-09 00:37:23 | INFO  | Task 873f1526-65ca-437b-8856-6cffa972644b (reboot) was prepared for execution.
2026-01-09 00:37:23.237017 | orchestrator | 2026-01-09 00:37:23 | INFO  | It takes a moment until task 873f1526-65ca-437b-8856-6cffa972644b (reboot) has been started and output is visible here.
2026-01-09 00:37:33.643855 | orchestrator |
2026-01-09 00:37:33.644060 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.644093 | orchestrator |
2026-01-09 00:37:33.644114 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.644135 | orchestrator | Friday 09 January 2026 00:37:27 +0000 (0:00:00.207) 0:00:00.207 ********
2026-01-09 00:37:33.644154 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:37:33.644169 | orchestrator |
2026-01-09 00:37:33.644180 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.644191 | orchestrator | Friday 09 January 2026 00:37:27 +0000 (0:00:00.120) 0:00:00.327 ********
2026-01-09 00:37:33.644202 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:37:33.644213 | orchestrator |
2026-01-09 00:37:33.644224 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.644235 | orchestrator | Friday 09 January 2026 00:37:28 +0000 (0:00:00.951) 0:00:01.278 ********
2026-01-09 00:37:33.644246 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:37:33.644256 | orchestrator |
2026-01-09 00:37:33.644267 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.644278 | orchestrator |
2026-01-09 00:37:33.644289 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.644299 | orchestrator | Friday 09 January 2026 00:37:28 +0000 (0:00:00.113) 0:00:01.392 ********
2026-01-09 00:37:33.644310 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:37:33.644348 | orchestrator |
2026-01-09 00:37:33.644361 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.644374 | orchestrator | Friday 09 January 2026 00:37:28 +0000 (0:00:00.105) 0:00:01.497 ********
2026-01-09 00:37:33.644386 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:37:33.644399 | orchestrator |
2026-01-09 00:37:33.644413 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.644425 | orchestrator | Friday 09 January 2026 00:37:29 +0000 (0:00:00.673) 0:00:02.171 ********
2026-01-09 00:37:33.644438 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:37:33.644451 | orchestrator |
2026-01-09 00:37:33.644464 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.644476 | orchestrator |
2026-01-09 00:37:33.644489 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.644502 | orchestrator | Friday 09 January 2026 00:37:29 +0000 (0:00:00.119) 0:00:02.290 ********
2026-01-09 00:37:33.644514 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:37:33.644526 | orchestrator |
2026-01-09 00:37:33.644540 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.644552 | orchestrator | Friday 09 January 2026 00:37:29 +0000 (0:00:00.206) 0:00:02.496 ********
2026-01-09 00:37:33.644564 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:37:33.644576 | orchestrator |
2026-01-09 00:37:33.644589 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.644605 | orchestrator | Friday 09 January 2026 00:37:30 +0000 (0:00:00.707) 0:00:03.204 ********
2026-01-09 00:37:33.644623 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:37:33.644640 | orchestrator |
2026-01-09 00:37:33.644659 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.644677 | orchestrator |
2026-01-09 00:37:33.644697 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.644717 | orchestrator | Friday 09 January 2026 00:37:30 +0000 (0:00:00.126) 0:00:03.330 ********
2026-01-09 00:37:33.644737 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:37:33.644756 | orchestrator |
2026-01-09 00:37:33.644774 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.644793 | orchestrator | Friday 09 January 2026 00:37:30 +0000 (0:00:00.101) 0:00:03.431 ********
2026-01-09 00:37:33.644804 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:37:33.644815 | orchestrator |
2026-01-09 00:37:33.644825 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.644836 | orchestrator | Friday 09 January 2026 00:37:31 +0000 (0:00:00.673) 0:00:04.105 ********
2026-01-09 00:37:33.644847 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:37:33.644857 | orchestrator |
2026-01-09 00:37:33.644875 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.644895 | orchestrator |
2026-01-09 00:37:33.644915 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.644934 | orchestrator | Friday 09 January 2026 00:37:31 +0000 (0:00:00.142) 0:00:04.248 ********
2026-01-09 00:37:33.644998 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:37:33.645019 | orchestrator |
2026-01-09 00:37:33.645041 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.645063 | orchestrator | Friday 09 January 2026 00:37:31 +0000 (0:00:00.105) 0:00:04.353 ********
2026-01-09 00:37:33.645083 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:37:33.645096 | orchestrator |
2026-01-09 00:37:33.645107 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.645135 | orchestrator | Friday 09 January 2026 00:37:32 +0000 (0:00:00.685) 0:00:05.039 ********
2026-01-09 00:37:33.645146 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:37:33.645157 | orchestrator |
2026-01-09 00:37:33.645168 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-09 00:37:33.645178 | orchestrator |
2026-01-09 00:37:33.645189 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-09 00:37:33.645211 | orchestrator | Friday 09 January 2026 00:37:32 +0000 (0:00:00.105) 0:00:05.144 ********
2026-01-09 00:37:33.645222 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:37:33.645232 | orchestrator |
2026-01-09 00:37:33.645243 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-09 00:37:33.645254 | orchestrator | Friday 09 January 2026 00:37:32 +0000 (0:00:00.097) 0:00:05.241 ********
2026-01-09 00:37:33.645264 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:37:33.645275 | orchestrator |
2026-01-09 00:37:33.645286 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-09 00:37:33.645297 | orchestrator | Friday 09 January 2026 00:37:33 +0000 (0:00:00.696) 0:00:05.938 ********
2026-01-09 00:37:33.645331 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:37:33.645343 | orchestrator |
2026-01-09 00:37:33.645354 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:37:33.645366 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645378 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645389 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645400 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645411 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645422 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:37:33.645432 | orchestrator |
2026-01-09 00:37:33.645443 | orchestrator |
2026-01-09 00:37:33.645454 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:37:33.645465 | orchestrator | Friday 09 January 2026 00:37:33 +0000 (0:00:00.029) 0:00:05.968 ********
2026-01-09 00:37:33.645476 | orchestrator | ===============================================================================
2026-01-09 00:37:33.645487 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s
2026-01-09 00:37:33.645497 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2026-01-09 00:37:33.645508 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2026-01-09 00:37:34.020804 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-09 00:37:46.191621 | orchestrator | 2026-01-09 00:37:46 | INFO  | Task 160048f2-2446-43f4-86ee-cc5e5cc261af (wait-for-connection) was prepared for execution.
2026-01-09 00:37:46.191760 | orchestrator | 2026-01-09 00:37:46 | INFO  | It takes a moment until task 160048f2-2446-43f4-86ee-cc5e5cc261af (wait-for-connection) has been started and output is visible here.
2026-01-09 00:38:02.685763 | orchestrator |
2026-01-09 00:38:02.685879 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-09 00:38:02.685894 | orchestrator |
2026-01-09 00:38:02.685903 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-09 00:38:02.685963 | orchestrator | Friday 09 January 2026 00:37:50 +0000 (0:00:00.237) 0:00:00.237 ********
2026-01-09 00:38:02.685973 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:38:02.685983 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:38:02.685992 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:38:02.686001 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:38:02.686009 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:38:02.686093 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:38:02.686104 | orchestrator |
2026-01-09 00:38:02.686113 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:38:02.686124 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686134 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686143 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686152 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686161 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686170 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:02.686179 | orchestrator |
2026-01-09 00:38:02.686187 | orchestrator |
2026-01-09 00:38:02.686209 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:38:02.686219 | orchestrator | Friday 09 January 2026 00:38:02 +0000 (0:00:11.656) 0:00:11.893 ********
2026-01-09 00:38:02.686227 | orchestrator | ===============================================================================
2026-01-09 00:38:02.686236 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.66s
2026-01-09 00:38:03.050228 | orchestrator | + osism apply hddtemp
2026-01-09 00:38:15.114465 | orchestrator | 2026-01-09 00:38:15 | INFO  | Task 58a928d1-f6eb-47c1-860e-ed47b5363af4 (hddtemp) was prepared for execution.
2026-01-09 00:38:15.114593 | orchestrator | 2026-01-09 00:38:15 | INFO  | It takes a moment until task 58a928d1-f6eb-47c1-860e-ed47b5363af4 (hddtemp) has been started and output is visible here.
2026-01-09 00:38:45.894956 | orchestrator |
2026-01-09 00:38:45.895064 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-09 00:38:45.895075 | orchestrator |
2026-01-09 00:38:45.895082 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-09 00:38:45.895090 | orchestrator | Friday 09 January 2026 00:38:19 +0000 (0:00:00.282) 0:00:00.282 ********
2026-01-09 00:38:45.895097 | orchestrator | ok: [testbed-manager]
2026-01-09 00:38:45.895104 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:38:45.895111 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:38:45.895117 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:38:45.895123 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:38:45.895129 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:38:45.895135 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:38:45.895141 | orchestrator |
2026-01-09 00:38:45.895148 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-09 00:38:45.895154 | orchestrator | Friday 09 January 2026 00:38:20 +0000 (0:00:00.769) 0:00:01.052 ********
2026-01-09 00:38:45.895164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:38:45.895175 | orchestrator |
2026-01-09 00:38:45.895181 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-09 00:38:45.895187 | orchestrator | Friday 09 January 2026 00:38:21 +0000 (0:00:01.263) 0:00:02.315 ********
2026-01-09 00:38:45.895194 | orchestrator | ok: [testbed-manager]
2026-01-09 00:38:45.895201 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:38:45.895207 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:38:45.895213 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:38:45.895219 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:38:45.895225 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:38:45.895251 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:38:45.895257 | orchestrator |
2026-01-09 00:38:45.895263 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-09 00:38:45.895269 | orchestrator | Friday 09 January 2026 00:38:24 +0000 (0:00:02.495) 0:00:04.811 ********
2026-01-09 00:38:45.895276 | orchestrator | changed: [testbed-manager]
2026-01-09 00:38:45.895283 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:38:45.895290 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:38:45.895296 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:38:45.895302 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:38:45.895319 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:38:45.895325 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:38:45.895332 | orchestrator |
2026-01-09 00:38:45.895346 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-09 00:38:45.895352 | orchestrator | Friday 09 January 2026 00:38:25 +0000 (0:00:01.260) 0:00:06.164 ********
2026-01-09 00:38:45.895358 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:38:45.895364 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:38:45.895370 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:38:45.895376 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:38:45.895382 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:38:45.895388 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:38:45.895394 | orchestrator | ok: [testbed-manager]
2026-01-09 00:38:45.895400 | orchestrator |
2026-01-09 00:38:45.895407 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-09 00:38:45.895413 | orchestrator | Friday 09 January 2026 00:38:26 +0000 (0:00:01.353) 0:00:07.425 ********
2026-01-09 00:38:45.895429 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:38:45.895436 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:38:45.895443 | orchestrator | changed: [testbed-manager]
2026-01-09 00:38:45.895458 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:38:45.895465 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:38:45.895472 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:38:45.895479 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:38:45.895486 | orchestrator |
2026-01-09 00:38:45.895494 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-09 00:38:45.895501 | orchestrator | Friday 09 January 2026 00:38:27 +0000 (0:00:00.869) 0:00:08.295 ********
2026-01-09 00:38:45.895508 | orchestrator | changed: [testbed-manager]
2026-01-09 00:38:45.895516 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:38:45.895523 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:38:45.895534 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:38:45.895545 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:38:45.895552 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:38:45.895559 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:38:45.895566 | orchestrator |
2026-01-09 00:38:45.895573 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-09 00:38:45.895580 | orchestrator | Friday 09 January 2026 00:38:42 +0000 (0:00:14.522) 0:00:22.818 ********
2026-01-09 00:38:45.895587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:38:45.895595 | orchestrator |
2026-01-09 00:38:45.895601 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-09 00:38:45.895621 | orchestrator | Friday 09 January 2026 00:38:43 +0000 (0:00:01.372) 0:00:24.190 ********
2026-01-09 00:38:45.895628 | orchestrator | changed: [testbed-manager]
2026-01-09 00:38:45.895635 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:38:45.895642 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:38:45.895649 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:38:45.895656 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:38:45.895663 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:38:45.895670 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:38:45.895683 | orchestrator |
2026-01-09 00:38:45.895690 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:38:45.895697 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:38:45.895721 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895729 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895736 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895744 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895751 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895758 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:38:45.895766 | orchestrator |
2026-01-09 00:38:45.895773 | orchestrator |
2026-01-09 00:38:45.895781 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:38:45.895788 | orchestrator | Friday 09 January 2026 00:38:45 +0000 (0:00:01.980) 0:00:26.171 ********
2026-01-09 00:38:45.895794 | orchestrator | ===============================================================================
2026-01-09 00:38:45.895800 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.52s
2026-01-09 00:38:45.895807 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.50s
2026-01-09 00:38:45.895813 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.98s
2026-01-09 00:38:45.895819 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s
2026-01-09 00:38:45.895825 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.35s
2026-01-09 00:38:45.895831 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s
2026-01-09 00:38:45.895837 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.26s
2026-01-09 00:38:45.895843 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s
2026-01-09 00:38:45.895849 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s
2026-01-09 00:38:46.254345 | orchestrator | ++ semver latest 7.1.1
2026-01-09 00:38:46.301280 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-09 00:38:46.301375 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-09 00:38:46.301389 | orchestrator | + sudo systemctl restart manager.service
2026-01-09 00:38:59.490628 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-09 00:38:59.490763 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-09 00:38:59.490778 | orchestrator | + local max_attempts=60
2026-01-09 00:38:59.490789 | orchestrator | + local name=ceph-ansible
2026-01-09 00:38:59.490799 | orchestrator | + local attempt_num=1
2026-01-09 00:38:59.490810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:38:59.529401 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:38:59.529512 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:38:59.529528 | orchestrator | + sleep 5
2026-01-09 00:39:04.535653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:04.573141 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:04.573230 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:04.573244 | orchestrator | + sleep 5
2026-01-09 00:39:09.577779 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:09.626665 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:09.626808 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:09.626825 | orchestrator | + sleep 5
2026-01-09 00:39:14.631916 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:14.668808 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:14.668902 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:14.668915 | orchestrator | + sleep 5
2026-01-09 00:39:19.673909 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:19.713736 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:19.713862 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:19.713876 | orchestrator | + sleep 5
2026-01-09 00:39:24.718401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:24.751935 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:24.752045 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:24.752069 | orchestrator | + sleep 5
2026-01-09 00:39:29.757604 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:29.798907 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:29.799019 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:29.799039 | orchestrator | + sleep 5
2026-01-09 00:39:34.804033 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:34.833466 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:34.833547 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:34.833562 | orchestrator | + sleep 5
2026-01-09 00:39:39.837170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:39.945155 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:39.945248 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:39.945256 | orchestrator | + sleep 5
2026-01-09 00:39:44.943924 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:44.982247 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:44.982328 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:44.982341 | orchestrator | + sleep 5
2026-01-09 00:39:49.987366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:50.025317 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:50.025421 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:50.025430 | orchestrator | + sleep 5
2026-01-09 00:39:55.029437 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:39:55.072989 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:39:55.073083 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:39:55.073094 | orchestrator | + sleep 5
2026-01-09 00:40:00.077862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:40:00.119540 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-09 00:40:00.119633 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-09 00:40:00.119643 | orchestrator | + sleep 5
2026-01-09 00:40:05.124130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-09 00:40:05.167224 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:40:05.167286 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-09 00:40:05.167295 | orchestrator | + local max_attempts=60
2026-01-09 00:40:05.167303 | orchestrator | + local name=kolla-ansible
2026-01-09 00:40:05.167310 | orchestrator | + local attempt_num=1
2026-01-09 00:40:05.168410 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-09 00:40:05.206466 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:40:05.206518 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-09 00:40:05.206524 | orchestrator | + local max_attempts=60
2026-01-09 00:40:05.206530 | orchestrator | + local name=osism-ansible
2026-01-09 00:40:05.206534 | orchestrator | + local attempt_num=1
2026-01-09 00:40:05.207543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-09 00:40:05.246286 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-09 00:40:05.246387 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-09 00:40:05.246413 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-09 00:40:05.436835 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-09 00:40:05.629081 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-09 00:40:05.808593 | orchestrator | ARA in osism-ansible already disabled.
2026-01-09 00:40:05.949840 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-09 00:40:05.950188 | orchestrator | + osism apply gather-facts
2026-01-09 00:40:18.229455 | orchestrator | 2026-01-09 00:40:18 | INFO  | Task 51be943e-8a41-41b8-8cca-010ec256d414 (gather-facts) was prepared for execution.
2026-01-09 00:40:18.229563 | orchestrator | 2026-01-09 00:40:18 | INFO  | It takes a moment until task 51be943e-8a41-41b8-8cca-010ec256d414 (gather-facts) has been started and output is visible here.
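[Editor's note: the `set -x` trace above repeatedly expands the body of a `wait_for_container_healthy` helper that polls `docker inspect` until a container reports "healthy", sleeping 5 seconds between attempts. The following is a hypothetical reconstruction from that trace, not the actual script shipped under /opt/configuration; the error message and the use of `docker` from PATH (the trace invokes `/usr/bin/docker`) are assumptions.]

```shell
# Reconstructed sketch of the health-wait helper seen in the xtrace above.
wait_for_container_healthy() {
    local max_attempts=$1   # e.g. 60 in the trace above
    local name=$2           # container name, e.g. ceph-ansible
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Give up once max_attempts polls have been consumed.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With 60 attempts and a 5-second sleep this bounds the wait at roughly five minutes per container, which matches the ceph-ansible container moving from "unhealthy" through "starting" to "healthy" in the trace.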
2026-01-09 00:40:32.479060 | orchestrator |
2026-01-09 00:40:32.479178 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-09 00:40:32.479196 | orchestrator |
2026-01-09 00:40:32.479208 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-09 00:40:32.479220 | orchestrator | Friday 09 January 2026 00:40:22 +0000 (0:00:00.220) 0:00:00.220 ********
2026-01-09 00:40:32.479231 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:40:32.479243 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:40:32.479254 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:40:32.479265 | orchestrator | ok: [testbed-manager]
2026-01-09 00:40:32.479275 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:40:32.479286 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:40:32.479297 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:40:32.479307 | orchestrator |
2026-01-09 00:40:32.479318 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-09 00:40:32.479329 | orchestrator |
2026-01-09 00:40:32.479340 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-09 00:40:32.479351 | orchestrator | Friday 09 January 2026 00:40:31 +0000 (0:00:09.027) 0:00:09.247 ********
2026-01-09 00:40:32.479362 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:40:32.479374 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:40:32.479384 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:40:32.479395 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:40:32.479406 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:40:32.479417 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:40:32.479427 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:40:32.479438 | orchestrator |
2026-01-09 00:40:32.479449 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:40:32.479460 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479472 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479483 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479494 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479505 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479516 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479527 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 00:40:32.479538 | orchestrator |
2026-01-09 00:40:32.479549 | orchestrator |
2026-01-09 00:40:32.479560 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:40:32.479587 | orchestrator | Friday 09 January 2026 00:40:32 +0000 (0:00:00.625) 0:00:09.873 ********
2026-01-09 00:40:32.479601 | orchestrator | ===============================================================================
2026-01-09 00:40:32.479615 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.03s
2026-01-09 00:40:32.479648 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-01-09 00:40:32.823351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-09 00:40:32.838473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-09 00:40:32.850703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-09 00:40:32.862635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-09 00:40:32.876253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-09 00:40:32.892465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-09 00:40:32.906757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-09 00:40:32.919929 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-09 00:40:32.934346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-09 00:40:32.948342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-09 00:40:32.962221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-09 00:40:32.975503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-09 00:40:32.989085 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-09 00:40:33.003854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-09 00:40:33.026146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-09 00:40:33.038171 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-09 00:40:33.051384 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-09 00:40:33.065536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-09 00:40:33.086637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-09 00:40:33.105487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-09 00:40:33.121842 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-09 00:40:33.512442 | orchestrator | ok: Runtime: 0:25:18.056592
2026-01-09 00:40:33.654191 |
2026-01-09 00:40:33.654344 | TASK [Deploy services]
2026-01-09 00:40:34.190155 | orchestrator | skipping: Conditional result was False
2026-01-09 00:40:34.211191 |
2026-01-09 00:40:34.211412 | TASK [Deploy in a nutshell]
2026-01-09 00:40:34.993369 | orchestrator | + set -e
2026-01-09 00:40:34.993566 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-09 00:40:34.993591 | orchestrator | ++ export INTERACTIVE=false
2026-01-09 00:40:34.993613 | orchestrator | ++ INTERACTIVE=false
2026-01-09 00:40:34.993628 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-09 00:40:34.994698 | orchestrator |
2026-01-09 00:40:34.994724 | orchestrator | # PULL IMAGES
2026-01-09 00:40:34.994797 | orchestrator |
2026-01-09 00:40:34.994823 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-09 00:40:34.994839 | orchestrator | + source /opt/manager-vars.sh
2026-01-09 00:40:34.994851 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-09 00:40:34.994869 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-09 00:40:34.994881 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-09 00:40:34.994899 | orchestrator | ++ CEPH_VERSION=reef
2026-01-09 00:40:34.994910 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-09 00:40:34.994929 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-09 00:40:34.994940 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-09 00:40:34.994955 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-09 00:40:34.994966 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-09 00:40:34.994979 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-09 00:40:34.994990 | orchestrator | ++ export ARA=false
2026-01-09 00:40:34.995001 | orchestrator | ++ ARA=false
2026-01-09 00:40:34.995026 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-09 00:40:34.995048 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-09 00:40:34.995059 | orchestrator | ++ export TEMPEST=true
2026-01-09 00:40:34.995070 | orchestrator | ++ TEMPEST=true
2026-01-09 00:40:34.995081 | orchestrator | ++ export IS_ZUUL=true
2026-01-09 00:40:34.995091 | orchestrator | ++ IS_ZUUL=true
2026-01-09 00:40:34.995103 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67
2026-01-09 00:40:34.995114 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67
2026-01-09 00:40:34.995124 | orchestrator | ++ export EXTERNAL_API=false
2026-01-09 00:40:34.995135 | orchestrator | ++ EXTERNAL_API=false
2026-01-09 00:40:34.995146 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-09 00:40:34.995157 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-09 00:40:34.995168 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-09 00:40:34.995178 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-09 00:40:34.995190 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-09 00:40:34.995201 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-09 00:40:34.995212 | orchestrator | + echo
2026-01-09 00:40:34.995223 | orchestrator | + echo '# PULL IMAGES'
2026-01-09 00:40:34.995234 | orchestrator | + echo
2026-01-09 00:40:34.995251 | orchestrator | ++ semver latest 7.0.0
2026-01-09 00:40:35.062312 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-09 00:40:35.062438 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-09 00:40:35.062454 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-09 00:40:36.879082 | orchestrator | 2026-01-09 00:40:36 | INFO  | Trying to run play pull-images in environment custom
2026-01-09 00:40:46.992530 | orchestrator | 2026-01-09 00:40:46 | INFO  | Task aeba78c6-2089-4274-ba8d-040a338ba3aa (pull-images) was prepared for execution.
2026-01-09 00:40:46.992661 | orchestrator | 2026-01-09 00:40:46 | INFO  | Task aeba78c6-2089-4274-ba8d-040a338ba3aa is running in background. No more output. Check ARA for logs.
2026-01-09 00:40:49.525055 | orchestrator | 2026-01-09 00:40:49 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-09 00:40:59.687325 | orchestrator | 2026-01-09 00:40:59 | INFO  | Task c8a37bb2-62f2-41f8-ab07-c3a2c8862eba (wipe-partitions) was prepared for execution.
2026-01-09 00:40:59.687426 | orchestrator | 2026-01-09 00:40:59 | INFO  | It takes a moment until task c8a37bb2-62f2-41f8-ab07-c3a2c8862eba (wipe-partitions) has been started and output is visible here.
2026-01-09 00:41:13.411644 | orchestrator |
2026-01-09 00:41:13.411799 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-09 00:41:13.411828 | orchestrator |
2026-01-09 00:41:13.411845 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-09 00:41:13.411866 | orchestrator | Friday 09 January 2026 00:41:03 +0000 (0:00:00.119) 0:00:00.119 ********
2026-01-09 00:41:13.411878 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:41:13.411890 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:41:13.411902 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:41:13.411914 | orchestrator |
2026-01-09 00:41:13.411925 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-09 00:41:13.411966 | orchestrator | Friday 09 January 2026 00:41:04 +0000 (0:00:00.593) 0:00:00.713 ********
2026-01-09 00:41:13.411978 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:13.411989 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:41:13.412005 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:41:13.412016 | orchestrator |
2026-01-09 00:41:13.412027 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-09 00:41:13.412038 | orchestrator | Friday 09 January 2026 00:41:04 +0000 (0:00:00.347) 0:00:01.060 ********
2026-01-09 00:41:13.412049 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:41:13.412061 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:41:13.412072 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:41:13.412083 | orchestrator |
2026-01-09 00:41:13.412094 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-09 00:41:13.412105 | orchestrator | Friday 09 January 2026 00:41:05 +0000 (0:00:00.601) 0:00:01.662 ********
2026-01-09 00:41:13.412116 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:13.412126 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:41:13.412137 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:41:13.412148 | orchestrator |
2026-01-09 00:41:13.412161 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-09 00:41:13.412173 | orchestrator | Friday 09 January 2026 00:41:05 +0000 (0:00:00.225) 0:00:01.888 ********
2026-01-09 00:41:13.412186 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-09 00:41:13.412203 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-09 00:41:13.412216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-09 00:41:13.412229 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-09 00:41:13.412242 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-09 00:41:13.412254 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-09 00:41:13.412267 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-09 00:41:13.412280 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-09 00:41:13.412293 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-09 00:41:13.412305 | orchestrator |
2026-01-09 00:41:13.412318 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-09 00:41:13.412332 | orchestrator | Friday 09 January 2026 00:41:07 +0000 (0:00:02.273) 0:00:04.161 ********
2026-01-09 00:41:13.412344 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-09 00:41:13.412357 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-09 00:41:13.412370 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-09 00:41:13.412383 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-09 00:41:13.412415 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-09 00:41:13.412439 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-09 00:41:13.412452 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-09 00:41:13.412465 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-09 00:41:13.412477 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-09 00:41:13.412490 | orchestrator |
2026-01-09 00:41:13.412503 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-09 00:41:13.412514 | orchestrator | Friday 09 January 2026 00:41:09 +0000 (0:00:01.569) 0:00:05.730 ********
2026-01-09 00:41:13.412525 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-09 00:41:13.412536 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-09 00:41:13.412546 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-09 00:41:13.412557 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-09 00:41:13.412568 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-09 00:41:13.412579 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-09 00:41:13.412590 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-09 00:41:13.412608 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-09 00:41:13.412627 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-09 00:41:13.412639 | orchestrator |
2026-01-09 00:41:13.412650 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-09 00:41:13.412660 | orchestrator | Friday 09 January 2026 00:41:11 +0000 (0:00:02.180) 0:00:07.911 ********
2026-01-09 00:41:13.412671 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:41:13.412682 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:41:13.412693 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:41:13.412703 | orchestrator |
2026-01-09 00:41:13.412714 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-09 00:41:13.412725 | orchestrator | Friday 09 January 2026 00:41:12 +0000 (0:00:00.632) 0:00:08.543 ********
2026-01-09 00:41:13.412825 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:41:13.412838 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:41:13.412849 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:41:13.412860 | orchestrator |
2026-01-09 00:41:13.412871 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:41:13.412885 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:13.412897 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:13.412928 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:13.412939 | orchestrator |
2026-01-09 00:41:13.412949 | orchestrator |
2026-01-09 00:41:13.412959 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:41:13.412968 | orchestrator | Friday 09 January 2026 00:41:13 +0000 (0:00:00.650) 0:00:09.193 ********
2026-01-09 00:41:13.412978 | orchestrator | ===============================================================================
2026-01-09 00:41:13.412988 | orchestrator | Check device availability ----------------------------------------------- 2.27s
2026-01-09 00:41:13.412997 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s
2026-01-09 00:41:13.413007 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2026-01-09 00:41:13.413017 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2026-01-09 00:41:13.413026 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-01-09 00:41:13.413036 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2026-01-09 00:41:13.413045 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-01-09 00:41:13.413055 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s
2026-01-09 00:41:13.413065 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2026-01-09 00:41:25.962769 | orchestrator | 2026-01-09 00:41:25 | INFO  | Task 4e584a1e-4547-4fbd-a23e-c83bd68164cf (facts) was prepared for execution.
2026-01-09 00:41:25.962961 | orchestrator | 2026-01-09 00:41:25 | INFO  | It takes a moment until task 4e584a1e-4547-4fbd-a23e-c83bd68164cf (facts) has been started and output is visible here.
2026-01-09 00:41:38.850251 | orchestrator |
2026-01-09 00:41:38.850381 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-09 00:41:38.850398 | orchestrator |
2026-01-09 00:41:38.850411 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-09 00:41:38.850423 | orchestrator | Friday 09 January 2026 00:41:30 +0000 (0:00:00.283) 0:00:00.283 ********
2026-01-09 00:41:38.850435 | orchestrator | ok: [testbed-manager]
2026-01-09 00:41:38.850447 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:41:38.850459 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:41:38.850495 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:41:38.850506 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:41:38.850517 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:41:38.850528 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:41:38.850539 | orchestrator |
2026-01-09 00:41:38.850550 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-09 00:41:38.850561 | orchestrator | Friday 09 January 2026 00:41:31 +0000 (0:00:01.168) 0:00:01.452 ********
2026-01-09 00:41:38.850572 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:41:38.850583 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:41:38.850594 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:41:38.850605 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:41:38.850616 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:38.850627 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:41:38.850638 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:41:38.850648 | orchestrator |
2026-01-09 00:41:38.850659 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-09 00:41:38.850670 | orchestrator |
2026-01-09 00:41:38.850698 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-09 00:41:38.850710 | orchestrator | Friday 09 January 2026 00:41:32 +0000 (0:00:01.356) 0:00:02.808 ********
2026-01-09 00:41:38.850751 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:41:38.850762 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:41:38.850777 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:41:38.850790 | orchestrator | ok: [testbed-manager]
2026-01-09 00:41:38.850803 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:41:38.850815 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:41:38.850828 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:41:38.850840 | orchestrator |
2026-01-09 00:41:38.850853 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-09 00:41:38.850866 | orchestrator |
2026-01-09 00:41:38.850879 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-09 00:41:38.850891 | orchestrator | Friday 09 January 2026 00:41:37 +0000 (0:00:04.945) 0:00:07.754 ********
2026-01-09 00:41:38.850905 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:41:38.850917 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:41:38.850931 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:41:38.850944 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:41:38.850957 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:38.850969 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:41:38.850981 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:41:38.850994 | orchestrator |
2026-01-09 00:41:38.851006 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:41:38.851018 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851032 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851043 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851054 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851065 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851076 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851087 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 00:41:38.851098 | orchestrator |
2026-01-09 00:41:38.851117 | orchestrator |
2026-01-09 00:41:38.851128 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:41:38.851139 | orchestrator | Friday 09 January 2026 00:41:38 +0000 (0:00:00.532) 0:00:08.286 ********
2026-01-09 00:41:38.851150 | orchestrator | ===============================================================================
2026-01-09 00:41:38.851161 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s
2026-01-09 00:41:38.851172 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2026-01-09 00:41:38.851183 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-01-09 00:41:38.851195 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2026-01-09 00:41:41.318431 | orchestrator | 2026-01-09 00:41:41 | INFO  | Task 69bd0771-94ce-405b-8d92-8e37df4871bd (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-09 00:41:41.320228 | orchestrator | 2026-01-09 00:41:41 | INFO  | It takes a moment until task 69bd0771-94ce-405b-8d92-8e37df4871bd (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-09 00:41:53.620266 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-09 00:41:53.620363 | orchestrator | 2.16.14
2026-01-09 00:41:53.620372 | orchestrator |
2026-01-09 00:41:53.620380 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-09 00:41:53.620386 | orchestrator |
2026-01-09 00:41:53.620392 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-09 00:41:53.620413 | orchestrator | Friday 09 January 2026 00:41:46 +0000 (0:00:00.334) 0:00:00.334 ********
2026-01-09 00:41:53.620421 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-09 00:41:53.620433 | orchestrator |
2026-01-09 00:41:53.620441 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-09 00:41:53.620451 | orchestrator | Friday 09 January 2026 00:41:46 +0000 (0:00:00.279) 0:00:00.614 ********
2026-01-09 00:41:53.620460 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:41:53.620468 | orchestrator |
2026-01-09 00:41:53.620474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620480 | orchestrator | Friday 09 January 2026 00:41:46 +0000 (0:00:00.259) 0:00:00.873 ********
2026-01-09 00:41:53.620487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-09 00:41:53.620501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-09 00:41:53.620508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-09 00:41:53.620513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-09 00:41:53.620519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-09 00:41:53.620525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-09 00:41:53.620530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-09 00:41:53.620536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-09 00:41:53.620542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-09 00:41:53.620548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-09 00:41:53.620553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-09 00:41:53.620559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-09 00:41:53.620564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-09 00:41:53.620570 | orchestrator |
2026-01-09 00:41:53.620575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620599 | orchestrator | Friday 09 January 2026 00:41:47 +0000 (0:00:00.470) 0:00:01.344 ********
2026-01-09 00:41:53.620605 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620610 | orchestrator |
2026-01-09 00:41:53.620616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620621 | orchestrator | Friday 09 January 2026 00:41:47 +0000 (0:00:00.208) 0:00:01.553 ********
2026-01-09 00:41:53.620627 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620632 | orchestrator |
2026-01-09 00:41:53.620638 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620644 | orchestrator | Friday 09 January 2026 00:41:47 +0000 (0:00:00.201) 0:00:01.754 ********
2026-01-09 00:41:53.620760 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620767 | orchestrator |
2026-01-09 00:41:53.620773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620782 | orchestrator | Friday 09 January 2026 00:41:47 +0000 (0:00:00.200) 0:00:01.954 ********
2026-01-09 00:41:53.620788 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620793 | orchestrator |
2026-01-09 00:41:53.620799 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620806 | orchestrator | Friday 09 January 2026 00:41:47 +0000 (0:00:00.193) 0:00:02.148 ********
2026-01-09 00:41:53.620813 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620819 | orchestrator |
2026-01-09 00:41:53.620826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620832 | orchestrator | Friday 09 January 2026 00:41:48 +0000 (0:00:00.202) 0:00:02.351 ********
2026-01-09 00:41:53.620838 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620845 | orchestrator |
2026-01-09 00:41:53.620861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620874 | orchestrator | Friday 09 January 2026 00:41:48 +0000 (0:00:00.202) 0:00:02.554 ********
2026-01-09 00:41:53.620881 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620887 | orchestrator |
2026-01-09 00:41:53.620893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620900 | orchestrator | Friday 09 January 2026 00:41:48 +0000 (0:00:00.211) 0:00:02.765 ********
2026-01-09 00:41:53.620906 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.620913 | orchestrator |
2026-01-09 00:41:53.620919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620926 | orchestrator | Friday 09 January 2026 00:41:48 +0000 (0:00:00.200) 0:00:02.966 ********
2026-01-09 00:41:53.620933 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7)
2026-01-09 00:41:53.620941 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7)
2026-01-09 00:41:53.620947 | orchestrator |
2026-01-09 00:41:53.620953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.620974 | orchestrator | Friday 09 January 2026 00:41:49 +0000 (0:00:00.387) 0:00:03.354 ********
2026-01-09 00:41:53.620980 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766)
2026-01-09 00:41:53.620992 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766)
2026-01-09 00:41:53.621088 | orchestrator |
2026-01-09 00:41:53.621097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.621104 | orchestrator | Friday 09 January 2026 00:41:49 +0000 (0:00:00.644) 0:00:03.998 ********
2026-01-09 00:41:53.621110 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8)
2026-01-09 00:41:53.621117 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8)
2026-01-09 00:41:53.621123 | orchestrator |
2026-01-09 00:41:53.621130 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.621144 | orchestrator | Friday 09 January 2026 00:41:50 +0000 (0:00:00.733) 0:00:04.731 ********
2026-01-09 00:41:53.621151 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d)
2026-01-09 00:41:53.621157 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d)
2026-01-09 00:41:53.621164 | orchestrator |
2026-01-09 00:41:53.621171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:41:53.621197 | orchestrator | Friday 09 January 2026 00:41:51 +0000 (0:00:00.928) 0:00:05.660 ********
2026-01-09 00:41:53.621204 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-09 00:41:53.621209 | orchestrator |
2026-01-09 00:41:53.621214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621220 | orchestrator | Friday 09 January 2026 00:41:51 +0000 (0:00:00.338) 0:00:05.998 ********
2026-01-09 00:41:53.621225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-09 00:41:53.621231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-09 00:41:53.621236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-09 00:41:53.621241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-09 00:41:53.621247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-09 00:41:53.621252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-09 00:41:53.621258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-09 00:41:53.621263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-09 00:41:53.621269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-09 00:41:53.621274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-09 00:41:53.621279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-09 00:41:53.621285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-09 00:41:53.621290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-09 00:41:53.621295 | orchestrator |
2026-01-09 00:41:53.621301 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621306 | orchestrator | Friday 09 January 2026 00:41:52 +0000 (0:00:00.419) 0:00:06.417 ********
2026-01-09 00:41:53.621312 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621317 | orchestrator |
2026-01-09 00:41:53.621323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621328 | orchestrator | Friday 09 January 2026 00:41:52 +0000 (0:00:00.220) 0:00:06.638 ********
2026-01-09 00:41:53.621334 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621339 | orchestrator |
2026-01-09 00:41:53.621344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621350 | orchestrator | Friday 09 January 2026 00:41:52 +0000 (0:00:00.211) 0:00:06.850 ********
2026-01-09 00:41:53.621355 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621361 | orchestrator |
2026-01-09 00:41:53.621366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621372 | orchestrator | Friday 09 January 2026 00:41:52 +0000 (0:00:00.211) 0:00:07.062 ********
2026-01-09 00:41:53.621377 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621383 | orchestrator |
2026-01-09 00:41:53.621388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621394 | orchestrator | Friday 09 January 2026 00:41:52 +0000 (0:00:00.216) 0:00:07.278 ********
2026-01-09 00:41:53.621405 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621411 | orchestrator |
2026-01-09 00:41:53.621416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621422 | orchestrator | Friday 09 January 2026 00:41:53 +0000 (0:00:00.218) 0:00:07.496 ********
2026-01-09 00:41:53.621427 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621433 | orchestrator |
2026-01-09 00:41:53.621438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:41:53.621444 | orchestrator | Friday 09 January 2026 00:41:53 +0000 (0:00:00.216) 0:00:07.712 ********
2026-01-09 00:41:53.621449 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:41:53.621454 | orchestrator |
2026-01-09 00:41:53.621465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.600644 | orchestrator | Friday 09 January 2026 00:41:53 +0000 (0:00:00.218) 0:00:07.931 ********
2026-01-09 00:42:01.600817 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.600834 | orchestrator |
2026-01-09 00:42:01.600848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.600860 | orchestrator | Friday 09 January 2026 00:41:53 +0000 (0:00:00.209) 0:00:08.140 ********
2026-01-09 00:42:01.600871 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-09 00:42:01.600902 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-09 00:42:01.600914 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-09 00:42:01.600925 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-09 00:42:01.600936 | orchestrator |
2026-01-09 00:42:01.600948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.600960 | orchestrator | Friday 09 January 2026 00:41:54 +0000 (0:00:01.124) 0:00:09.265 ********
2026-01-09 00:42:01.600971 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.600982 | orchestrator |
2026-01-09 00:42:01.600992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.601003 | orchestrator | Friday 09 January 2026 00:41:55 +0000 (0:00:00.239) 0:00:09.504 ********
2026-01-09 00:42:01.601014 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601025 | orchestrator |
2026-01-09 00:42:01.601036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.601047 | orchestrator | Friday 09 January 2026 00:41:55 +0000 (0:00:00.215) 0:00:09.720 ********
2026-01-09 00:42:01.601058 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601069 | orchestrator |
2026-01-09 00:42:01.601080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:01.601091 | orchestrator | Friday 09 January 2026 00:41:55 +0000 (0:00:00.216) 0:00:09.936 ********
2026-01-09 00:42:01.601102 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601113 | orchestrator |
2026-01-09 00:42:01.601124 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-09 00:42:01.601135 | orchestrator | Friday 09 January 2026 00:41:55 +0000 (0:00:00.202) 0:00:10.139 ********
2026-01-09 00:42:01.601146 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-09 00:42:01.601157 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-09 00:42:01.601170 | orchestrator |
2026-01-09 00:42:01.601183 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-09 00:42:01.601197 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.184) 0:00:10.324 ********
2026-01-09 00:42:01.601210 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601223 | orchestrator |
2026-01-09 00:42:01.601236 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-09 00:42:01.601249 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.142) 0:00:10.466 ********
2026-01-09 00:42:01.601262 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601275 | orchestrator |
2026-01-09 00:42:01.601288 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-09 00:42:01.601321 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.151) 0:00:10.618 ********
2026-01-09 00:42:01.601333 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601347 | orchestrator |
2026-01-09 00:42:01.601360 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-09 00:42:01.601373 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.173) 0:00:10.792 ********
2026-01-09 00:42:01.601386 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:42:01.601399 | orchestrator |
2026-01-09 00:42:01.601413 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-09 00:42:01.601426 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.137) 0:00:10.929 ********
2026-01-09 00:42:01.601439 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8cf949ba-669c-5e80-aece-22faa35a4e96'}})
2026-01-09 00:42:01.601452 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '827da1a7-5d25-503a-baf6-83b57b40e5ca'}})
2026-01-09 00:42:01.601465 | orchestrator |
2026-01-09 00:42:01.601482 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-09 00:42:01.601504 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.176) 0:00:11.105 ********
2026-01-09 00:42:01.601524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8cf949ba-669c-5e80-aece-22faa35a4e96'}})
2026-01-09 00:42:01.601553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '827da1a7-5d25-503a-baf6-83b57b40e5ca'}})
2026-01-09 00:42:01.601572 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601590 | orchestrator |
2026-01-09 00:42:01.601602 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-09 00:42:01.601613 | orchestrator | Friday 09 January 2026 00:41:56 +0000 (0:00:00.156) 0:00:11.262 ********
2026-01-09 00:42:01.601624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8cf949ba-669c-5e80-aece-22faa35a4e96'}})
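The "Set UUIDs for OSD VGs/LVs" task above assigns each OSD device a stable identifier; the values that appear in the log (8cf949ba-669c-5e80-… and 827da1a7-5d25-503a-…) carry the version digit 5, i.e. they are name-based UUIDs, which stay identical across re-runs for the same inputs. A sketch of how such a deterministic ID can be derived (the namespace and the host/device name scheme are assumptions for illustration; the play's actual inputs are not visible in this log):

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based (version 5) UUID for a host/device pair.
    NAMESPACE_DNS and the f-string name are illustrative choices, not the
    play's real namespace/name inputs."""
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}/{device}"))
```

Because the UUID is a pure function of its inputs, re-running the play regenerates the same VG/LV names instead of inventing new ones.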
2026-01-09 00:42:01.601635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '827da1a7-5d25-503a-baf6-83b57b40e5ca'}})
2026-01-09 00:42:01.601646 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601657 | orchestrator |
2026-01-09 00:42:01.601668 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-09 00:42:01.601679 | orchestrator | Friday 09 January 2026 00:41:57 +0000 (0:00:00.373) 0:00:11.636 ********
2026-01-09 00:42:01.601691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8cf949ba-669c-5e80-aece-22faa35a4e96'}})
2026-01-09 00:42:01.601815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '827da1a7-5d25-503a-baf6-83b57b40e5ca'}})
2026-01-09 00:42:01.601829 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601840 | orchestrator |
2026-01-09 00:42:01.601851 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-09 00:42:01.601862 | orchestrator | Friday 09 January 2026 00:41:57 +0000 (0:00:00.155) 0:00:11.799 ********
2026-01-09 00:42:01.601873 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:42:01.601884 | orchestrator |
2026-01-09 00:42:01.601895 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-09 00:42:01.601906 | orchestrator | Friday 09 January 2026 00:41:57 +0000 (0:00:00.137) 0:00:11.955 ********
2026-01-09 00:42:01.601917 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:42:01.601928 | orchestrator |
2026-01-09 00:42:01.601938 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-09 00:42:01.601950 | orchestrator | Friday 09 January 2026 00:41:57 +0000 (0:00:00.142) 0:00:12.093 ********
2026-01-09 00:42:01.601960 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.601971 | orchestrator |
2026-01-09 00:42:01.601982 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-09 00:42:01.601993 | orchestrator | Friday 09 January 2026 00:41:57 +0000 (0:00:00.168) 0:00:12.236 ********
2026-01-09 00:42:01.602079 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.602092 | orchestrator |
2026-01-09 00:42:01.602103 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-09 00:42:01.602114 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.146) 0:00:12.404 ********
2026-01-09 00:42:01.602125 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.602136 | orchestrator |
2026-01-09 00:42:01.602147 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-09 00:42:01.602158 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.154) 0:00:12.551 ********
2026-01-09 00:42:01.602168 | orchestrator | ok: [testbed-node-3] => {
2026-01-09 00:42:01.602179 | orchestrator |     "ceph_osd_devices": {
2026-01-09 00:42:01.602190 | orchestrator |         "sdb": {
2026-01-09 00:42:01.602202 | orchestrator |             "osd_lvm_uuid": "8cf949ba-669c-5e80-aece-22faa35a4e96"
2026-01-09 00:42:01.602213 | orchestrator |         },
2026-01-09 00:42:01.602224 | orchestrator |         "sdc": {
2026-01-09 00:42:01.602235 | orchestrator |             "osd_lvm_uuid": "827da1a7-5d25-503a-baf6-83b57b40e5ca"
2026-01-09 00:42:01.602245 | orchestrator |         }
2026-01-09 00:42:01.602256 | orchestrator |     }
2026-01-09 00:42:01.602267 | orchestrator | }
2026-01-09 00:42:01.602278 | orchestrator |
2026-01-09 00:42:01.602289 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-09 00:42:01.602308 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.161) 0:00:12.705 ********
2026-01-09 00:42:01.602319 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.602330 | orchestrator |
2026-01-09 00:42:01.602341 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-09 00:42:01.602352 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.161) 0:00:12.867 ********
2026-01-09 00:42:01.602362 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.602373 | orchestrator |
2026-01-09 00:42:01.602384 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-09 00:42:01.602395 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.138) 0:00:13.005 ********
2026-01-09 00:42:01.602406 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:42:01.602416 | orchestrator |
2026-01-09 00:42:01.602427 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-09 00:42:01.602438 | orchestrator | Friday 09 January 2026 00:41:58 +0000 (0:00:00.166) 0:00:13.171 ********
2026-01-09 00:42:01.602449 | orchestrator | changed: [testbed-node-3] => {
2026-01-09 00:42:01.602459 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-09 00:42:01.602470 | orchestrator |         "ceph_osd_devices": {
2026-01-09 00:42:01.602484 | orchestrator |             "sdb": {
2026-01-09 00:42:01.602504 | orchestrator |                 "osd_lvm_uuid": "8cf949ba-669c-5e80-aece-22faa35a4e96"
2026-01-09 00:42:01.602517 | orchestrator |             },
2026-01-09 00:42:01.602528 | orchestrator |             "sdc": {
2026-01-09 00:42:01.602539 | orchestrator |                 "osd_lvm_uuid": "827da1a7-5d25-503a-baf6-83b57b40e5ca"
2026-01-09 00:42:01.602550 | orchestrator |             }
2026-01-09 00:42:01.602560 | orchestrator |         },
2026-01-09 00:42:01.602571 | orchestrator |         "lvm_volumes": [
2026-01-09 00:42:01.602582 | orchestrator |             {
2026-01-09 00:42:01.602593 | orchestrator |                 "data": "osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96",
2026-01-09 00:42:01.602604 | orchestrator |                 "data_vg": "ceph-8cf949ba-669c-5e80-aece-22faa35a4e96"
2026-01-09 00:42:01.602615 | orchestrator |             },
2026-01-09 00:42:01.602626 | orchestrator |             {
2026-01-09 00:42:01.602636 | orchestrator |                 "data": "osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca",
2026-01-09 00:42:01.602647 | orchestrator |                 "data_vg": "ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca"
2026-01-09 00:42:01.602658 | orchestrator |             }
2026-01-09 00:42:01.602669 | orchestrator |         ]
2026-01-09 00:42:01.602680 | orchestrator |     }
2026-01-09 00:42:01.602723 | orchestrator | }
2026-01-09 00:42:01.602736 | orchestrator |
2026-01-09 00:42:01.602753 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-09 00:42:01.602768 | orchestrator | Friday 09 January 2026 00:41:59 +0000 (0:00:00.418) 0:00:13.589 ********
2026-01-09 00:42:01.602779 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-09 00:42:01.602790 | orchestrator |
2026-01-09 00:42:01.602801 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-09 00:42:01.602812 | orchestrator |
2026-01-09 00:42:01.602823 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-09 00:42:01.602833 | orchestrator | Friday 09 January 2026 00:42:01 +0000 (0:00:01.886) 0:00:15.475 ********
2026-01-09 00:42:01.602844 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-09 00:42:01.602855 | orchestrator |
2026-01-09 00:42:01.602866 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-09 00:42:01.602877 | orchestrator | Friday 09 January 2026 00:42:01 +0000 (0:00:00.240) 0:00:15.716 ********
2026-01-09 00:42:01.602888 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:42:01.602898 | orchestrator |
2026-01-09 00:42:01.602918 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.735512 | orchestrator | Friday 09 January 2026 00:42:01 +0000 (0:00:00.196) 0:00:15.912 ********
2026-01-09 00:42:08.735638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-09 00:42:08.735654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-09 00:42:08.735666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-09 00:42:08.735678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-09 00:42:08.735689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-09 00:42:08.735754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-09 00:42:08.735766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-09 00:42:08.735797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-09 00:42:08.735809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-09 00:42:08.735820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-09 00:42:08.735831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-09 00:42:08.735848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-09 00:42:08.735860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-09 00:42:08.735871 | orchestrator |
2026-01-09 00:42:08.735883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.735895 | orchestrator | Friday 09 January 2026 00:42:01 +0000 (0:00:00.367) 0:00:16.279 ********
2026-01-09 00:42:08.735906 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.735918 | orchestrator |
2026-01-09 00:42:08.735929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.735940 | orchestrator | Friday 09 January 2026 00:42:02 +0000 (0:00:00.204) 0:00:16.484 ********
2026-01-09 00:42:08.735951 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.735962 | orchestrator |
2026-01-09 00:42:08.735973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.735984 | orchestrator | Friday 09 January 2026 00:42:02 +0000 (0:00:00.192) 0:00:16.677 ********
2026-01-09 00:42:08.735995 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736006 | orchestrator |
2026-01-09 00:42:08.736017 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736053 | orchestrator | Friday 09 January 2026 00:42:02 +0000 (0:00:00.171) 0:00:16.848 ********
2026-01-09 00:42:08.736067 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736080 | orchestrator |
2026-01-09 00:42:08.736093 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736106 | orchestrator | Friday 09 January 2026 00:42:02 +0000 (0:00:00.172) 0:00:17.021 ********
2026-01-09 00:42:08.736118 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736130 | orchestrator |
2026-01-09 00:42:08.736144 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736157 | orchestrator | Friday 09 January 2026 00:42:03 +0000 (0:00:00.501) 0:00:17.523 ********
2026-01-09 00:42:08.736170 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736182 | orchestrator |
2026-01-09 00:42:08.736196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736209 | orchestrator | Friday 09 January 2026 00:42:03 +0000 (0:00:00.189) 0:00:17.712 ********
2026-01-09 00:42:08.736222 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736234 | orchestrator |
2026-01-09 00:42:08.736246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736259 | orchestrator | Friday 09 January 2026 00:42:03 +0000 (0:00:00.172) 0:00:17.884 ********
2026-01-09 00:42:08.736273 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.736286 | orchestrator |
2026-01-09 00:42:08.736298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736311 | orchestrator | Friday 09 January 2026 00:42:03 +0000 (0:00:00.225) 0:00:18.110 ********
2026-01-09 00:42:08.736324 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430)
2026-01-09 00:42:08.736339 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430)
2026-01-09 00:42:08.736351 | orchestrator |
2026-01-09 00:42:08.736364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736377 | orchestrator | Friday 09 January 2026 00:42:04 +0000 (0:00:00.384) 0:00:18.495 ********
2026-01-09 00:42:08.736390 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e)
2026-01-09 00:42:08.736402 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e)
2026-01-09 00:42:08.736416 | orchestrator |
2026-01-09 00:42:08.736427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736438 | orchestrator | Friday 09 January 2026 00:42:04 +0000 (0:00:00.392) 0:00:18.887 ********
2026-01-09 00:42:08.736448 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541)
2026-01-09 00:42:08.736459 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541)
2026-01-09 00:42:08.736470 | orchestrator |
2026-01-09 00:42:08.736481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736508 | orchestrator | Friday 09 January 2026 00:42:04 +0000 (0:00:00.406) 0:00:19.293 ********
2026-01-09 00:42:08.736519 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795)
2026-01-09 00:42:08.736530 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795)
2026-01-09 00:42:08.736541 | orchestrator |
2026-01-09 00:42:08.736559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:08.736570 | orchestrator | Friday 09 January 2026 00:42:05 +0000 (0:00:00.377) 0:00:19.671 ********
2026-01-09 00:42:08.736581 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-09 00:42:08.736592 | orchestrator |
2026-01-09 00:42:08.736602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.736613 | orchestrator | Friday 09 January 2026 00:42:05 +0000 (0:00:00.296) 0:00:19.967 ********
2026-01-09 00:42:08.736632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-09 00:42:08.736643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-09 00:42:08.736653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-09 00:42:08.736664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-09 00:42:08.736745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-09 00:42:08.736773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-09 00:42:08.736784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-09 00:42:08.736794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-09 00:42:08.736805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-09 00:42:08.736815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-09 00:42:08.736983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-09 00:42:08.736998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-09 00:42:08.737008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-09 00:42:08.737019 | orchestrator |
2026-01-09 00:42:08.737052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737064 | orchestrator | Friday 09 January 2026 00:42:05 +0000 (0:00:00.338) 0:00:20.306 ********
2026-01-09 00:42:08.737075 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737085 | orchestrator |
2026-01-09 00:42:08.737096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737107 | orchestrator | Friday 09 January 2026 00:42:06 +0000 (0:00:00.502) 0:00:20.808 ********
2026-01-09 00:42:08.737118 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737155 | orchestrator |
2026-01-09 00:42:08.737167 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737178 | orchestrator | Friday 09 January 2026 00:42:06 +0000 (0:00:00.165) 0:00:20.974 ********
2026-01-09 00:42:08.737188 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737199 | orchestrator |
2026-01-09 00:42:08.737210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737221 | orchestrator | Friday 09 January 2026 00:42:06 +0000 (0:00:00.167) 0:00:21.141 ********
2026-01-09 00:42:08.737231 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737242 | orchestrator |
2026-01-09 00:42:08.737253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737264 | orchestrator | Friday 09 January 2026 00:42:06 +0000 (0:00:00.164) 0:00:21.306 ********
2026-01-09 00:42:08.737274 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737285 | orchestrator |
2026-01-09 00:42:08.737296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737307 | orchestrator | Friday 09 January 2026 00:42:07 +0000 (0:00:00.163) 0:00:21.469 ********
2026-01-09 00:42:08.737317 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737328 | orchestrator |
2026-01-09 00:42:08.737339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737350 | orchestrator | Friday 09 January 2026 00:42:07 +0000 (0:00:00.171) 0:00:21.641 ********
2026-01-09 00:42:08.737360 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737371 | orchestrator |
2026-01-09 00:42:08.737382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737392 | orchestrator | Friday 09 January 2026 00:42:07 +0000 (0:00:00.169) 0:00:21.810 ********
2026-01-09 00:42:08.737413 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:08.737424 | orchestrator |
2026-01-09 00:42:08.737434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737445 | orchestrator | Friday 09 January 2026 00:42:07 +0000 (0:00:00.192) 0:00:22.003 ********
2026-01-09 00:42:08.737456 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-09 00:42:08.737468 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-09 00:42:08.737479 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-09 00:42:08.737489 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-09 00:42:08.737500 | orchestrator |
2026-01-09 00:42:08.737511 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:08.737521 | orchestrator | Friday 09 January 2026 00:42:08 +0000 (0:00:00.813) 0:00:22.816 ********
2026-01-09 00:42:08.737532 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829117 | orchestrator |
2026-01-09 00:42:14.829217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:14.829230 | orchestrator | Friday 09 January 2026 00:42:08 +0000 (0:00:00.235) 0:00:23.051 ********
2026-01-09 00:42:14.829240 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829249 | orchestrator |
2026-01-09 00:42:14.829257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:14.829282 | orchestrator | Friday 09 January 2026 00:42:08 +0000 (0:00:00.205) 0:00:23.257 ********
2026-01-09 00:42:14.829291 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829299 | orchestrator |
2026-01-09 00:42:14.829307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:14.829315 | orchestrator | Friday 09 January 2026 00:42:09 +0000 (0:00:00.190) 0:00:23.447 ********
2026-01-09 00:42:14.829323 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829331 | orchestrator |
2026-01-09 00:42:14.829339 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-09 00:42:14.829347 | orchestrator | Friday 09 January 2026 00:42:09 +0000 (0:00:00.693) 0:00:24.140 ********
2026-01-09 00:42:14.829355 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-09 00:42:14.829363 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-09 00:42:14.829371 | orchestrator |
2026-01-09 00:42:14.829379 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-09 00:42:14.829387 | orchestrator | Friday 09 January 2026 00:42:09 +0000 (0:00:00.162) 0:00:24.303 ********
2026-01-09 00:42:14.829394 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829403 | orchestrator |
2026-01-09 00:42:14.829411 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-09 00:42:14.829418 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.107) 0:00:24.410 ********
2026-01-09 00:42:14.829426 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829434 | orchestrator |
2026-01-09 00:42:14.829442 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-09 00:42:14.829450 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.107) 0:00:24.517 ********
2026-01-09 00:42:14.829457 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829465 | orchestrator |
2026-01-09 00:42:14.829473 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-09 00:42:14.829481 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.131) 0:00:24.649 ********
2026-01-09 00:42:14.829489 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:42:14.829498 | orchestrator |
2026-01-09 00:42:14.829506 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-09 00:42:14.829573 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.153) 0:00:24.803 ********
2026-01-09 00:42:14.829585 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2edbad7c-3e58-5742-8752-3a5bd5d561b5'}})
2026-01-09 00:42:14.829593 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '209c90a3-928e-55d9-9ec8-b900c012dcc3'}})
2026-01-09 00:42:14.829622 | orchestrator |
2026-01-09 00:42:14.829630 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-09 00:42:14.829639 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.140) 0:00:24.943 ********
2026-01-09 00:42:14.829647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2edbad7c-3e58-5742-8752-3a5bd5d561b5'}})
2026-01-09 00:42:14.829659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '209c90a3-928e-55d9-9ec8-b900c012dcc3'}})
2026-01-09 00:42:14.829668 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829677 | orchestrator |
2026-01-09 00:42:14.829686 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-09 00:42:14.829716 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.112) 0:00:25.056 ********
2026-01-09 00:42:14.829725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2edbad7c-3e58-5742-8752-3a5bd5d561b5'}})
2026-01-09 00:42:14.829735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '209c90a3-928e-55d9-9ec8-b900c012dcc3'}})
2026-01-09 00:42:14.829744 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829753 | orchestrator |
2026-01-09 00:42:14.829762 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-09 00:42:14.829771 | orchestrator | Friday 09 January 2026 00:42:10 +0000 (0:00:00.147) 0:00:25.204 ********
2026-01-09 00:42:14.829781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2edbad7c-3e58-5742-8752-3a5bd5d561b5'}})
2026-01-09 00:42:14.829790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '209c90a3-928e-55d9-9ec8-b900c012dcc3'}})
2026-01-09 00:42:14.829799 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829808 | orchestrator |
2026-01-09 00:42:14.829818 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-09 00:42:14.829827 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.131) 0:00:25.335 ********
2026-01-09 00:42:14.829836 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:42:14.829844 | orchestrator |
2026-01-09 00:42:14.829853 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-09 00:42:14.829863 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.127) 0:00:25.462 ********
2026-01-09 00:42:14.829873 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:42:14.829882 | orchestrator |
2026-01-09 00:42:14.829891 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-09 00:42:14.829901 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.159) 0:00:25.622 ********
2026-01-09 00:42:14.829925 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829934 | orchestrator |
2026-01-09 00:42:14.829943 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-09 00:42:14.829952 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.319) 0:00:25.942 ********
2026-01-09 00:42:14.829962 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.829971 | orchestrator |
2026-01-09 00:42:14.829981 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-09 00:42:14.829991 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.126) 0:00:26.068 ********
2026-01-09 00:42:14.829999 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.830007 | orchestrator |
2026-01-09 00:42:14.830062 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-09 00:42:14.830071 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.105) 0:00:26.174 ********
2026-01-09 00:42:14.830078 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:42:14.830086 | orchestrator |     "ceph_osd_devices": {
2026-01-09 00:42:14.830094 | orchestrator |         "sdb": {
2026-01-09 00:42:14.830103 | orchestrator |             "osd_lvm_uuid": "2edbad7c-3e58-5742-8752-3a5bd5d561b5"
2026-01-09 00:42:14.830118 | orchestrator |         },
2026-01-09 00:42:14.830126 | orchestrator |         "sdc": {
2026-01-09 00:42:14.830141 | orchestrator |             "osd_lvm_uuid": "209c90a3-928e-55d9-9ec8-b900c012dcc3"
2026-01-09 00:42:14.830149 | orchestrator |         }
2026-01-09 00:42:14.830157 | orchestrator |     }
2026-01-09 00:42:14.830165 | orchestrator | }
2026-01-09 00:42:14.830174 | orchestrator |
2026-01-09 00:42:14.830182 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-09 00:42:14.830190 | orchestrator | Friday 09 January 2026 00:42:11 +0000 (0:00:00.107) 0:00:26.281 ********
2026-01-09 00:42:14.830197 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.830205 | orchestrator |
2026-01-09 00:42:14.830213 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-09 00:42:14.830221 | orchestrator | Friday 09 January 2026 00:42:12 +0000 (0:00:00.137) 0:00:26.419 ********
2026-01-09 00:42:14.830228 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.830236 | orchestrator |
2026-01-09 00:42:14.830244 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-09 00:42:14.830252 | orchestrator | Friday 09 January 2026 00:42:12 +0000 (0:00:00.122) 0:00:26.541 ********
2026-01-09 00:42:14.830260 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:42:14.830267 | orchestrator |
2026-01-09 00:42:14.830275 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-09 00:42:14.830283 | orchestrator | Friday 09 January 2026 00:42:12 +0000 (0:00:00.105) 0:00:26.647 ********
2026-01-09 00:42:14.830291 | orchestrator | changed: [testbed-node-4] => {
2026-01-09 00:42:14.830298 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-09 00:42:14.830306 | orchestrator |         "ceph_osd_devices": {
2026-01-09 00:42:14.830314 | orchestrator |             "sdb": {
2026-01-09 00:42:14.830326 | orchestrator |                 "osd_lvm_uuid": "2edbad7c-3e58-5742-8752-3a5bd5d561b5"
2026-01-09 00:42:14.830334 | orchestrator |             },
2026-01-09 00:42:14.830342 | orchestrator |             "sdc": {
2026-01-09 00:42:14.830350 | orchestrator |                 "osd_lvm_uuid": "209c90a3-928e-55d9-9ec8-b900c012dcc3"
2026-01-09 00:42:14.830358 | orchestrator |             }
2026-01-09 00:42:14.830365 | orchestrator |         },
2026-01-09 00:42:14.830373 | orchestrator |         "lvm_volumes": [
2026-01-09 00:42:14.830381 | orchestrator |             {
2026-01-09 00:42:14.830389 | orchestrator |                 "data": "osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5",
2026-01-09 00:42:14.830397 | orchestrator |                 "data_vg": "ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5"
2026-01-09 00:42:14.830404 | orchestrator |             },
2026-01-09 00:42:14.830412 | orchestrator |             {
2026-01-09 00:42:14.830420 | orchestrator |                 "data": "osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3",
2026-01-09 00:42:14.830428 | orchestrator |                 "data_vg": "ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3"
2026-01-09 00:42:14.830435 | orchestrator |             }
2026-01-09 00:42:14.830443 | orchestrator |         ]
2026-01-09 00:42:14.830451 | orchestrator |     }
2026-01-09 00:42:14.830458 | orchestrator | }
2026-01-09 00:42:14.830466 | orchestrator |
2026-01-09 00:42:14.830474 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-09 00:42:14.830482 | orchestrator | Friday 09 January 2026 00:42:12 +0000 (0:00:00.195) 0:00:26.843 ********
2026-01-09 00:42:14.830490 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-09 00:42:14.830497 | orchestrator |
2026-01-09 00:42:14.830505 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-09 00:42:14.830513 | orchestrator |
2026-01-09 00:42:14.830520 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-09 00:42:14.830528 | orchestrator | Friday 09 January 2026 00:42:13 +0000 (0:00:01.071) 0:00:27.914 ********
2026-01-09 00:42:14.830536 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-09 00:42:14.830544 | orchestrator |
2026-01-09 00:42:14.830552 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-09 00:42:14.830564 | orchestrator | Friday 09 January 2026 00:42:14 +0000 (0:00:00.585) 0:00:28.500 ********
2026-01-09 00:42:14.830572 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:42:14.830580 | orchestrator |
2026-01-09 00:42:14.830588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:14.830595 | orchestrator | Friday 09 January 2026 00:42:14 +0000 (0:00:00.288) 0:00:28.788 ********
2026-01-09 00:42:14.830603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-09 00:42:14.830611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-09 00:42:14.830619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-09 00:42:14.830627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-09 00:42:14.830634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-09 00:42:14.830647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-09 00:42:22.940098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-09 00:42:22.940198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-09 00:42:22.940208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-09 00:42:22.940215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-09 00:42:22.940225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-09 00:42:22.940236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-09 00:42:22.940247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-09 00:42:22.940261 | orchestrator |
2026-01-09 00:42:22.940273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940286 | orchestrator | Friday 09 January 2026 00:42:14 +0000 (0:00:00.351) 0:00:29.140 ********
2026-01-09 00:42:22.940297 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940307 | orchestrator |
2026-01-09 00:42:22.940319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940329 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.205) 0:00:29.345 ********
2026-01-09 00:42:22.940340 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940351 | orchestrator |
2026-01-09 00:42:22.940362 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940372 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.182) 0:00:29.528 ********
2026-01-09 00:42:22.940384 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940396 | orchestrator |
2026-01-09 00:42:22.940406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940417 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.183) 0:00:29.712 ********
2026-01-09 00:42:22.940428 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940440 | orchestrator |
2026-01-09 00:42:22.940452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940464 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.185) 0:00:29.897 ********
2026-01-09 00:42:22.940475 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940483 | orchestrator |
2026-01-09 00:42:22.940490 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940497 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.202) 0:00:30.100 ********
2026-01-09 00:42:22.940504 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940512 | orchestrator |
2026-01-09 00:42:22.940534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:42:22.940562 | orchestrator | Friday 09 January 2026 00:42:15 +0000 (0:00:00.194) 0:00:30.294 ********
2026-01-09 00:42:22.940569 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.940575 | orchestrator |
2026-01-09 00:42:22.940582 | orchestrator | TASK
[Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940589 | orchestrator | Friday 09 January 2026 00:42:16 +0000 (0:00:00.188) 0:00:30.483 ******** 2026-01-09 00:42:22.940595 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:42:22.940602 | orchestrator | 2026-01-09 00:42:22.940609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940616 | orchestrator | Friday 09 January 2026 00:42:16 +0000 (0:00:00.201) 0:00:30.685 ******** 2026-01-09 00:42:22.940624 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b) 2026-01-09 00:42:22.940633 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b) 2026-01-09 00:42:22.940640 | orchestrator | 2026-01-09 00:42:22.940648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940656 | orchestrator | Friday 09 January 2026 00:42:17 +0000 (0:00:00.728) 0:00:31.413 ******** 2026-01-09 00:42:22.940664 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88) 2026-01-09 00:42:22.940671 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88) 2026-01-09 00:42:22.940679 | orchestrator | 2026-01-09 00:42:22.940713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940721 | orchestrator | Friday 09 January 2026 00:42:17 +0000 (0:00:00.414) 0:00:31.828 ******** 2026-01-09 00:42:22.940729 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338) 2026-01-09 00:42:22.940736 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338) 2026-01-09 00:42:22.940744 | orchestrator | 2026-01-09 
00:42:22.940752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940759 | orchestrator | Friday 09 January 2026 00:42:17 +0000 (0:00:00.389) 0:00:32.218 ******** 2026-01-09 00:42:22.940768 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595) 2026-01-09 00:42:22.940775 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595) 2026-01-09 00:42:22.940783 | orchestrator | 2026-01-09 00:42:22.940790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:42:22.940798 | orchestrator | Friday 09 January 2026 00:42:18 +0000 (0:00:00.402) 0:00:32.620 ******** 2026-01-09 00:42:22.940806 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-09 00:42:22.940814 | orchestrator | 2026-01-09 00:42:22.940821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:42:22.940845 | orchestrator | Friday 09 January 2026 00:42:18 +0000 (0:00:00.312) 0:00:32.933 ******** 2026-01-09 00:42:22.940855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-09 00:42:22.940863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-09 00:42:22.940872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-09 00:42:22.940879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-09 00:42:22.940887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-09 00:42:22.940895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-09 00:42:22.940904 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-09 00:42:22.940913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-09 00:42:22.940928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-09 00:42:22.940937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-09 00:42:22.940945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-09 00:42:22.940953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-09 00:42:22.940962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-09 00:42:22.940972 | orchestrator | 2026-01-09 00:42:22.940985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:42:22.940997 | orchestrator | Friday 09 January 2026 00:42:18 +0000 (0:00:00.337) 0:00:33.271 ******** 2026-01-09 00:42:22.941009 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:42:22.941020 | orchestrator | 2026-01-09 00:42:22.941033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:42:22.941044 | orchestrator | Friday 09 January 2026 00:42:19 +0000 (0:00:00.172) 0:00:33.443 ******** 2026-01-09 00:42:22.941055 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:42:22.941065 | orchestrator | 2026-01-09 00:42:22.941077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:42:22.941090 | orchestrator | Friday 09 January 2026 00:42:19 +0000 (0:00:00.171) 0:00:33.615 ******** 2026-01-09 00:42:22.941102 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:42:22.941114 | orchestrator | 
2026-01-09 00:42:22.941126 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941139 | orchestrator | Friday 09 January 2026 00:42:19 +0000 (0:00:00.171) 0:00:33.786 ********
2026-01-09 00:42:22.941151 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941163 | orchestrator |
2026-01-09 00:42:22.941176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941188 | orchestrator | Friday 09 January 2026 00:42:19 +0000 (0:00:00.191) 0:00:33.978 ********
2026-01-09 00:42:22.941200 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941214 | orchestrator |
2026-01-09 00:42:22.941226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941239 | orchestrator | Friday 09 January 2026 00:42:19 +0000 (0:00:00.208) 0:00:34.187 ********
2026-01-09 00:42:22.941251 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941263 | orchestrator |
2026-01-09 00:42:22.941275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941288 | orchestrator | Friday 09 January 2026 00:42:20 +0000 (0:00:00.750) 0:00:34.937 ********
2026-01-09 00:42:22.941299 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941312 | orchestrator |
2026-01-09 00:42:22.941322 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941334 | orchestrator | Friday 09 January 2026 00:42:20 +0000 (0:00:00.231) 0:00:35.169 ********
2026-01-09 00:42:22.941346 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941358 | orchestrator |
2026-01-09 00:42:22.941371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941383 | orchestrator | Friday 09 January 2026 00:42:21 +0000 (0:00:00.272) 0:00:35.441 ********
2026-01-09 00:42:22.941395 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-09 00:42:22.941406 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-09 00:42:22.941413 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-09 00:42:22.941421 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-09 00:42:22.941428 | orchestrator |
2026-01-09 00:42:22.941435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941443 | orchestrator | Friday 09 January 2026 00:42:21 +0000 (0:00:00.838) 0:00:36.280 ********
2026-01-09 00:42:22.941450 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941465 | orchestrator |
2026-01-09 00:42:22.941472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941487 | orchestrator | Friday 09 January 2026 00:42:22 +0000 (0:00:00.334) 0:00:36.614 ********
2026-01-09 00:42:22.941495 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941502 | orchestrator |
2026-01-09 00:42:22.941509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941517 | orchestrator | Friday 09 January 2026 00:42:22 +0000 (0:00:00.231) 0:00:36.846 ********
2026-01-09 00:42:22.941524 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941531 | orchestrator |
2026-01-09 00:42:22.941539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:42:22.941552 | orchestrator | Friday 09 January 2026 00:42:22 +0000 (0:00:00.206) 0:00:37.052 ********
2026-01-09 00:42:22.941564 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:22.941576 | orchestrator |
2026-01-09 00:42:22.941599 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-09 00:42:27.348228 | orchestrator | Friday 09 January 2026 00:42:22 +0000 (0:00:00.199) 0:00:37.252 ********
2026-01-09 00:42:27.348359 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-09 00:42:27.348377 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-09 00:42:27.348388 | orchestrator |
2026-01-09 00:42:27.348398 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-09 00:42:27.349213 | orchestrator | Friday 09 January 2026 00:42:23 +0000 (0:00:00.234) 0:00:37.487 ********
2026-01-09 00:42:27.349233 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349240 | orchestrator |
2026-01-09 00:42:27.349247 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-09 00:42:27.349252 | orchestrator | Friday 09 January 2026 00:42:23 +0000 (0:00:00.159) 0:00:37.647 ********
2026-01-09 00:42:27.349257 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349262 | orchestrator |
2026-01-09 00:42:27.349267 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-09 00:42:27.349273 | orchestrator | Friday 09 January 2026 00:42:23 +0000 (0:00:00.128) 0:00:37.775 ********
2026-01-09 00:42:27.349278 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349282 | orchestrator |
2026-01-09 00:42:27.349288 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-09 00:42:27.349292 | orchestrator | Friday 09 January 2026 00:42:23 +0000 (0:00:00.354) 0:00:38.130 ********
2026-01-09 00:42:27.349297 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:42:27.349303 | orchestrator |
2026-01-09 00:42:27.349309 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-09 00:42:27.349314 | orchestrator | Friday 09 January 2026 00:42:23 +0000 (0:00:00.149) 0:00:38.280 ********
2026-01-09 00:42:27.349319 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11533966-1bdf-5daf-a468-949db0b9bc1b'}})
2026-01-09 00:42:27.349325 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}})
2026-01-09 00:42:27.349330 | orchestrator |
2026-01-09 00:42:27.349335 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-09 00:42:27.349340 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.172) 0:00:38.452 ********
2026-01-09 00:42:27.349345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11533966-1bdf-5daf-a468-949db0b9bc1b'}})
2026-01-09 00:42:27.349367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}})
2026-01-09 00:42:27.349372 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349377 | orchestrator |
2026-01-09 00:42:27.349382 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-09 00:42:27.349387 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.146) 0:00:38.599 ********
2026-01-09 00:42:27.349410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11533966-1bdf-5daf-a468-949db0b9bc1b'}})
2026-01-09 00:42:27.349415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}})
2026-01-09 00:42:27.349420 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349425 | orchestrator |
2026-01-09 00:42:27.349430 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-09 00:42:27.349434 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.148) 0:00:38.747 ********
2026-01-09 00:42:27.349439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11533966-1bdf-5daf-a468-949db0b9bc1b'}})
2026-01-09 00:42:27.349444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}})
2026-01-09 00:42:27.349449 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349454 | orchestrator |
2026-01-09 00:42:27.349459 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-09 00:42:27.349464 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.170) 0:00:38.918 ********
2026-01-09 00:42:27.349468 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:42:27.349473 | orchestrator |
2026-01-09 00:42:27.349478 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-09 00:42:27.349483 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.165) 0:00:39.084 ********
2026-01-09 00:42:27.349488 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:42:27.349492 | orchestrator |
2026-01-09 00:42:27.349497 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-09 00:42:27.349502 | orchestrator | Friday 09 January 2026 00:42:24 +0000 (0:00:00.134) 0:00:39.218 ********
2026-01-09 00:42:27.349507 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349511 | orchestrator |
2026-01-09 00:42:27.349516 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-09 00:42:27.349521 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.126) 0:00:39.344 ********
2026-01-09 00:42:27.349526 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349531 | orchestrator |
2026-01-09 00:42:27.349535 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-09 00:42:27.349540 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.128) 0:00:39.472 ********
2026-01-09 00:42:27.349545 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349550 | orchestrator |
2026-01-09 00:42:27.349555 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-09 00:42:27.349559 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.125) 0:00:39.598 ********
2026-01-09 00:42:27.349573 | orchestrator | ok: [testbed-node-5] => {
2026-01-09 00:42:27.349579 | orchestrator |     "ceph_osd_devices": {
2026-01-09 00:42:27.349591 | orchestrator |         "sdb": {
2026-01-09 00:42:27.349614 | orchestrator |             "osd_lvm_uuid": "11533966-1bdf-5daf-a468-949db0b9bc1b"
2026-01-09 00:42:27.349620 | orchestrator |         },
2026-01-09 00:42:27.349625 | orchestrator |         "sdc": {
2026-01-09 00:42:27.349630 | orchestrator |             "osd_lvm_uuid": "aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"
2026-01-09 00:42:27.349635 | orchestrator |         }
2026-01-09 00:42:27.349640 | orchestrator |     }
2026-01-09 00:42:27.349645 | orchestrator | }
2026-01-09 00:42:27.349650 | orchestrator |
2026-01-09 00:42:27.349655 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-09 00:42:27.349660 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.133) 0:00:39.731 ********
2026-01-09 00:42:27.349665 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349669 | orchestrator |
2026-01-09 00:42:27.349674 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-09 00:42:27.349679 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.393) 0:00:40.125 ********
2026-01-09 00:42:27.349705 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349713 | orchestrator |
2026-01-09 00:42:27.349718 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-09 00:42:27.349723 | orchestrator | Friday 09 January 2026 00:42:25 +0000 (0:00:00.150) 0:00:40.275 ********
2026-01-09 00:42:27.349728 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:42:27.349733 | orchestrator |
2026-01-09 00:42:27.349738 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-09 00:42:27.349743 | orchestrator | Friday 09 January 2026 00:42:26 +0000 (0:00:00.159) 0:00:40.435 ********
2026-01-09 00:42:27.349748 | orchestrator | changed: [testbed-node-5] => {
2026-01-09 00:42:27.349753 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-09 00:42:27.349758 | orchestrator |         "ceph_osd_devices": {
2026-01-09 00:42:27.349763 | orchestrator |             "sdb": {
2026-01-09 00:42:27.349767 | orchestrator |                 "osd_lvm_uuid": "11533966-1bdf-5daf-a468-949db0b9bc1b"
2026-01-09 00:42:27.349772 | orchestrator |             },
2026-01-09 00:42:27.349778 | orchestrator |             "sdc": {
2026-01-09 00:42:27.349782 | orchestrator |                 "osd_lvm_uuid": "aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"
2026-01-09 00:42:27.349787 | orchestrator |             }
2026-01-09 00:42:27.349792 | orchestrator |         },
2026-01-09 00:42:27.349797 | orchestrator |         "lvm_volumes": [
2026-01-09 00:42:27.349802 | orchestrator |             {
2026-01-09 00:42:27.349807 | orchestrator |                 "data": "osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b",
2026-01-09 00:42:27.349811 | orchestrator |                 "data_vg": "ceph-11533966-1bdf-5daf-a468-949db0b9bc1b"
2026-01-09 00:42:27.349886 | orchestrator |             },
2026-01-09 00:42:27.349891 | orchestrator |             {
2026-01-09 00:42:27.349897 | orchestrator |                 "data": "osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f",
2026-01-09 00:42:27.349909 | orchestrator |                 "data_vg": "ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"
2026-01-09 00:42:27.349914 | orchestrator |             }
2026-01-09 00:42:27.349923 | orchestrator |         ]
2026-01-09 00:42:27.349928 | orchestrator |     }
2026-01-09 00:42:27.349933 | orchestrator | }
2026-01-09 00:42:27.349938 | orchestrator |
2026-01-09 00:42:27.349943 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-09 00:42:27.349948 | orchestrator | Friday 09 January 2026 00:42:26 +0000 (0:00:00.221) 0:00:40.656 ********
2026-01-09 00:42:27.349952 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-09 00:42:27.349957 | orchestrator |
2026-01-09 00:42:27.349962 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:42:27.349968 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 00:42:27.349975 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 00:42:27.349980 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 00:42:27.349985 | orchestrator |
2026-01-09 00:42:27.349990 | orchestrator |
2026-01-09 00:42:27.349994 | orchestrator |
2026-01-09 00:42:27.349999 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:42:27.350004 | orchestrator | Friday 09 January 2026 00:42:27 +0000 (0:00:00.979) 0:00:41.636 ********
2026-01-09 00:42:27.350009 | orchestrator | ===============================================================================
2026-01-09 00:42:27.350048 | orchestrator | Write configuration file ------------------------------------------------ 3.94s
2026-01-09 00:42:27.350054 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s
2026-01-09 00:42:27.350058 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2026-01-09 00:42:27.350063 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s
2026-01-09 00:42:27.350074 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-01-09 00:42:27.350078 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-01-09 00:42:27.350083 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2026-01-09 00:42:27.350088 | orchestrator | Print configuration data ------------------------------------------------ 0.83s
2026-01-09 00:42:27.350093 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2026-01-09 00:42:27.350098 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-01-09 00:42:27.350102 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-01-09 00:42:27.350107 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2026-01-09 00:42:27.350112 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2026-01-09 00:42:27.350124 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-01-09 00:42:27.776296 | orchestrator | Print WAL devices ------------------------------------------------------- 0.69s
2026-01-09 00:42:27.776376 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.67s
2026-01-09 00:42:27.776382 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.66s
2026-01-09 00:42:27.776386 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-01-09 00:42:27.776391 | orchestrator | Set DB devices config data ---------------------------------------------- 0.59s
2026-01-09 00:42:27.776395 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s
2026-01-09 00:42:50.617088 | orchestrator | 2026-01-09 00:42:50 | INFO  | Task 05a1e5b1-053d-4f40-9e6e-4ba252de1117 (sync inventory) is running in background. Output coming soon.
2026-01-09 00:43:18.535414 | orchestrator | 2026-01-09 00:42:52 | INFO  | Starting group_vars file reorganization
2026-01-09 00:43:18.535549 | orchestrator | 2026-01-09 00:42:52 | INFO  | Moved 0 file(s) to their respective directories
2026-01-09 00:43:18.535561 | orchestrator | 2026-01-09 00:42:52 | INFO  | Group_vars file reorganization completed
2026-01-09 00:43:18.535567 | orchestrator | 2026-01-09 00:42:55 | INFO  | Starting variable preparation from inventory
2026-01-09 00:43:18.535576 | orchestrator | 2026-01-09 00:42:58 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-09 00:43:18.535584 | orchestrator | 2026-01-09 00:42:58 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-09 00:43:18.535589 | orchestrator | 2026-01-09 00:42:58 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-09 00:43:18.535594 | orchestrator | 2026-01-09 00:42:58 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-09 00:43:18.535601 | orchestrator | 2026-01-09 00:42:58 | INFO  | Variable preparation completed
2026-01-09 00:43:18.535606 | orchestrator | 2026-01-09 00:43:00 | INFO  | Starting inventory overwrite handling
2026-01-09 00:43:18.535612 | orchestrator | 2026-01-09 00:43:00 | INFO  | Handling group overwrites in 99-overwrite
2026-01-09 00:43:18.535617 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removing group frr:children from 60-generic
2026-01-09 00:43:18.535622 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-09 00:43:18.535627 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-09 00:43:18.535632 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-09 00:43:18.536552 | orchestrator | 2026-01-09 00:43:00 | INFO  | Handling group overwrites in 20-roles
2026-01-09 00:43:18.536619 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-09 00:43:18.536631 | orchestrator | 2026-01-09 00:43:00 | INFO  | Removed 5 group(s) in total
2026-01-09 00:43:18.536640 | orchestrator | 2026-01-09 00:43:00 | INFO  | Inventory overwrite handling completed
2026-01-09 00:43:18.536676 | orchestrator | 2026-01-09 00:43:01 | INFO  | Starting merge of inventory files
2026-01-09 00:43:18.536683 | orchestrator | 2026-01-09 00:43:01 | INFO  | Inventory files merged successfully
2026-01-09 00:43:18.536688 | orchestrator | 2026-01-09 00:43:06 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-09 00:43:18.536700 | orchestrator | 2026-01-09 00:43:17 | INFO  | Successfully wrote ClusterShell configuration
2026-01-09 00:43:18.536706 | orchestrator | [master 5665fcd] 2026-01-09-00-43
2026-01-09 00:43:18.536713 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-09 00:43:20.639232 | orchestrator | 2026-01-09 00:43:20 | INFO  | Task 0442adc0-3d63-402c-89ae-3784de0950a7 (ceph-create-lvm-devices) was prepared for execution.
2026-01-09 00:43:20.639347 | orchestrator | 2026-01-09 00:43:20 | INFO  | It takes a moment until task 0442adc0-3d63-402c-89ae-3784de0950a7 (ceph-create-lvm-devices) has been started and output is visible here.
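The "Print configuration data" output above shows how the play turns each `ceph_osd_devices` entry into a `lvm_volumes` item: the generated `osd_lvm_uuid` reappears with an `osd-block-` prefix as the LV name and a `ceph-` prefix as the VG name. The following is a minimal sketch of that mapping, inferred only from the values printed in this log (the function name and the block-only restriction are assumptions, not the actual OSISM task code):

```python
# Values as printed by the "Print ceph_osd_devices" task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "11533966-1bdf-5daf-a468-949db0b9bc1b"},
    "sdc": {"osd_lvm_uuid": "aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"},
}

def lvm_volumes_block_only(devices):
    """Build lvm_volumes entries for the block-only case (no db/wal),
    mirroring the naming scheme visible in the log output."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",     # LV name
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",       # VG name
        }
        for cfg in devices.values()
    ]
```

With the two devices above, this reproduces the `lvm_volumes` list shown in the `_ceph_configure_lvm_config_data` dump; the separate "block + db", "block + wal", and "block + db + wal" tasks are skipped here because no DB/WAL devices are configured.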
2026-01-09 00:43:33.293040 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-09 00:43:33.293128 | orchestrator | 2.16.14
2026-01-09 00:43:33.293135 | orchestrator |
2026-01-09 00:43:33.293141 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-09 00:43:33.293146 | orchestrator |
2026-01-09 00:43:33.293150 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-09 00:43:33.293155 | orchestrator | Friday 09 January 2026 00:43:25 +0000 (0:00:00.392) 0:00:00.392 ********
2026-01-09 00:43:33.293160 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-09 00:43:33.293164 | orchestrator |
2026-01-09 00:43:33.293168 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-09 00:43:33.293172 | orchestrator | Friday 09 January 2026 00:43:25 +0000 (0:00:00.244) 0:00:00.636 ********
2026-01-09 00:43:33.293176 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:43:33.293180 | orchestrator |
2026-01-09 00:43:33.293185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293189 | orchestrator | Friday 09 January 2026 00:43:25 +0000 (0:00:00.189) 0:00:00.826 ********
2026-01-09 00:43:33.293193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-09 00:43:33.293197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-09 00:43:33.293201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-09 00:43:33.293205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-09 00:43:33.293208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-09 00:43:33.293212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-09 00:43:33.293216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-09 00:43:33.293220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-09 00:43:33.293224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-09 00:43:33.293241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-09 00:43:33.293245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-09 00:43:33.293249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-09 00:43:33.293268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-09 00:43:33.293272 | orchestrator |
2026-01-09 00:43:33.293276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293279 | orchestrator | Friday 09 January 2026 00:43:26 +0000 (0:00:00.442) 0:00:01.269 ********
2026-01-09 00:43:33.293283 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293287 | orchestrator |
2026-01-09 00:43:33.293291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293295 | orchestrator | Friday 09 January 2026 00:43:26 +0000 (0:00:00.245) 0:00:01.515 ********
2026-01-09 00:43:33.293299 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293302 | orchestrator |
2026-01-09 00:43:33.293309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293313 | orchestrator | Friday 09 January 2026 00:43:26 +0000 (0:00:00.243) 0:00:01.759 ********
2026-01-09 00:43:33.293317 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293321 | orchestrator |
2026-01-09 00:43:33.293324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293328 | orchestrator | Friday 09 January 2026 00:43:27 +0000 (0:00:00.188) 0:00:01.947 ********
2026-01-09 00:43:33.293332 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293336 | orchestrator |
2026-01-09 00:43:33.293339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293343 | orchestrator | Friday 09 January 2026 00:43:27 +0000 (0:00:00.197) 0:00:02.145 ********
2026-01-09 00:43:33.293347 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293351 | orchestrator |
2026-01-09 00:43:33.293355 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293358 | orchestrator | Friday 09 January 2026 00:43:27 +0000 (0:00:00.230) 0:00:02.376 ********
2026-01-09 00:43:33.293362 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293366 | orchestrator |
2026-01-09 00:43:33.293370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293374 | orchestrator | Friday 09 January 2026 00:43:27 +0000 (0:00:00.182) 0:00:02.559 ********
2026-01-09 00:43:33.293377 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293381 | orchestrator |
2026-01-09 00:43:33.293385 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293389 | orchestrator | Friday 09 January 2026 00:43:27 +0000 (0:00:00.173) 0:00:02.732 ********
2026-01-09 00:43:33.293393 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:43:33.293396 | orchestrator |
2026-01-09 00:43:33.293400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293404 | orchestrator | Friday 09 January 2026 00:43:28 +0000 (0:00:00.208) 0:00:02.941 ********
2026-01-09 00:43:33.293408 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7)
2026-01-09 00:43:33.293414 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7)
2026-01-09 00:43:33.293418 | orchestrator |
2026-01-09 00:43:33.293422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293437 | orchestrator | Friday 09 January 2026 00:43:28 +0000 (0:00:00.453) 0:00:03.395 ********
2026-01-09 00:43:33.293441 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766)
2026-01-09 00:43:33.293445 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766)
2026-01-09 00:43:33.293449 | orchestrator |
2026-01-09 00:43:33.293452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293456 | orchestrator | Friday 09 January 2026 00:43:29 +0000 (0:00:00.662) 0:00:04.057 ********
2026-01-09 00:43:33.293460 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8)
2026-01-09 00:43:33.293468 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8)
2026-01-09 00:43:33.293472 | orchestrator |
2026-01-09 00:43:33.293475 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:43:33.293479 | orchestrator | Friday 09 January 2026 00:43:29 +0000 (0:00:00.687) 0:00:04.745 ********
2026-01-09 00:43:33.293483 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d)
2026-01-09 00:43:33.293487 | orchestrator |
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d) 2026-01-09 00:43:33.293491 | orchestrator | 2026-01-09 00:43:33.293494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:33.293498 | orchestrator | Friday 09 January 2026 00:43:30 +0000 (0:00:00.957) 0:00:05.703 ******** 2026-01-09 00:43:33.293502 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-09 00:43:33.293506 | orchestrator | 2026-01-09 00:43:33.293509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293513 | orchestrator | Friday 09 January 2026 00:43:31 +0000 (0:00:00.363) 0:00:06.066 ******** 2026-01-09 00:43:33.293517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-09 00:43:33.293521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-09 00:43:33.293525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-09 00:43:33.293529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-09 00:43:33.293532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-09 00:43:33.293536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-09 00:43:33.293540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-09 00:43:33.293544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-09 00:43:33.293547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-09 00:43:33.293551 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-09 00:43:33.293555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-09 00:43:33.293559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-09 00:43:33.293562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-09 00:43:33.293566 | orchestrator | 2026-01-09 00:43:33.293570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293574 | orchestrator | Friday 09 January 2026 00:43:31 +0000 (0:00:00.457) 0:00:06.524 ******** 2026-01-09 00:43:33.293577 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293581 | orchestrator | 2026-01-09 00:43:33.293586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293591 | orchestrator | Friday 09 January 2026 00:43:31 +0000 (0:00:00.238) 0:00:06.763 ******** 2026-01-09 00:43:33.293595 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293600 | orchestrator | 2026-01-09 00:43:33.293604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293609 | orchestrator | Friday 09 January 2026 00:43:32 +0000 (0:00:00.244) 0:00:07.007 ******** 2026-01-09 00:43:33.293613 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293618 | orchestrator | 2026-01-09 00:43:33.293622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293627 | orchestrator | Friday 09 January 2026 00:43:32 +0000 (0:00:00.254) 0:00:07.261 ******** 2026-01-09 00:43:33.293631 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293638 | orchestrator | 2026-01-09 00:43:33.293643 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293647 | orchestrator | Friday 09 January 2026 00:43:32 +0000 (0:00:00.238) 0:00:07.500 ******** 2026-01-09 00:43:33.293691 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293695 | orchestrator | 2026-01-09 00:43:33.293700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293705 | orchestrator | Friday 09 January 2026 00:43:32 +0000 (0:00:00.231) 0:00:07.732 ******** 2026-01-09 00:43:33.293709 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293713 | orchestrator | 2026-01-09 00:43:33.293718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:33.293722 | orchestrator | Friday 09 January 2026 00:43:33 +0000 (0:00:00.223) 0:00:07.956 ******** 2026-01-09 00:43:33.293727 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:33.293732 | orchestrator | 2026-01-09 00:43:33.293739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295439 | orchestrator | Friday 09 January 2026 00:43:33 +0000 (0:00:00.250) 0:00:08.206 ******** 2026-01-09 00:43:42.295528 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295536 | orchestrator | 2026-01-09 00:43:42.295542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295547 | orchestrator | Friday 09 January 2026 00:43:33 +0000 (0:00:00.231) 0:00:08.437 ******** 2026-01-09 00:43:42.295552 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-09 00:43:42.295557 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-09 00:43:42.295562 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-09 00:43:42.295566 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-09 00:43:42.295570 | orchestrator | 2026-01-09 
00:43:42.295575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295579 | orchestrator | Friday 09 January 2026 00:43:34 +0000 (0:00:01.292) 0:00:09.729 ******** 2026-01-09 00:43:42.295583 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295587 | orchestrator | 2026-01-09 00:43:42.295591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295595 | orchestrator | Friday 09 January 2026 00:43:35 +0000 (0:00:00.274) 0:00:10.004 ******** 2026-01-09 00:43:42.295599 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295603 | orchestrator | 2026-01-09 00:43:42.295608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295612 | orchestrator | Friday 09 January 2026 00:43:35 +0000 (0:00:00.235) 0:00:10.239 ******** 2026-01-09 00:43:42.295616 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295620 | orchestrator | 2026-01-09 00:43:42.295624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:43:42.295628 | orchestrator | Friday 09 January 2026 00:43:35 +0000 (0:00:00.231) 0:00:10.471 ******** 2026-01-09 00:43:42.295632 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295636 | orchestrator | 2026-01-09 00:43:42.295640 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-09 00:43:42.295682 | orchestrator | Friday 09 January 2026 00:43:35 +0000 (0:00:00.233) 0:00:10.705 ******** 2026-01-09 00:43:42.295688 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295692 | orchestrator | 2026-01-09 00:43:42.295696 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-09 00:43:42.295700 | orchestrator | Friday 09 January 2026 00:43:35 +0000 (0:00:00.153) 
0:00:10.858 ******** 2026-01-09 00:43:42.295718 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8cf949ba-669c-5e80-aece-22faa35a4e96'}}) 2026-01-09 00:43:42.295723 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '827da1a7-5d25-503a-baf6-83b57b40e5ca'}}) 2026-01-09 00:43:42.295727 | orchestrator | 2026-01-09 00:43:42.295731 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-09 00:43:42.295749 | orchestrator | Friday 09 January 2026 00:43:36 +0000 (0:00:00.260) 0:00:11.119 ******** 2026-01-09 00:43:42.295754 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'}) 2026-01-09 00:43:42.295760 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'}) 2026-01-09 00:43:42.295764 | orchestrator | 2026-01-09 00:43:42.295771 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-09 00:43:42.295775 | orchestrator | Friday 09 January 2026 00:43:38 +0000 (0:00:02.169) 0:00:13.288 ******** 2026-01-09 00:43:42.295779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.295784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.295788 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295792 | orchestrator | 2026-01-09 00:43:42.295796 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-09 00:43:42.295800 | orchestrator | Friday 09 January 2026 
00:43:38 +0000 (0:00:00.186) 0:00:13.474 ******** 2026-01-09 00:43:42.295804 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'}) 2026-01-09 00:43:42.295808 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'}) 2026-01-09 00:43:42.295812 | orchestrator | 2026-01-09 00:43:42.295817 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-09 00:43:42.295820 | orchestrator | Friday 09 January 2026 00:43:40 +0000 (0:00:01.467) 0:00:14.942 ******** 2026-01-09 00:43:42.295824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.295828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.295832 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295836 | orchestrator | 2026-01-09 00:43:42.295840 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-09 00:43:42.295844 | orchestrator | Friday 09 January 2026 00:43:40 +0000 (0:00:00.191) 0:00:15.134 ******** 2026-01-09 00:43:42.295860 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295864 | orchestrator | 2026-01-09 00:43:42.295868 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-09 00:43:42.295872 | orchestrator | Friday 09 January 2026 00:43:40 +0000 (0:00:00.143) 0:00:15.278 ******** 2026-01-09 00:43:42.295876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 
'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.295880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.295884 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295888 | orchestrator | 2026-01-09 00:43:42.295892 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-09 00:43:42.295896 | orchestrator | Friday 09 January 2026 00:43:40 +0000 (0:00:00.458) 0:00:15.736 ******** 2026-01-09 00:43:42.295900 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295903 | orchestrator | 2026-01-09 00:43:42.295907 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-09 00:43:42.295911 | orchestrator | Friday 09 January 2026 00:43:40 +0000 (0:00:00.175) 0:00:15.912 ******** 2026-01-09 00:43:42.295919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.295923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.295927 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295931 | orchestrator | 2026-01-09 00:43:42.295935 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-09 00:43:42.295939 | orchestrator | Friday 09 January 2026 00:43:41 +0000 (0:00:00.171) 0:00:16.083 ******** 2026-01-09 00:43:42.295943 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295947 | orchestrator | 2026-01-09 00:43:42.295951 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-09 00:43:42.295955 | orchestrator | Friday 
09 January 2026 00:43:41 +0000 (0:00:00.179) 0:00:16.263 ******** 2026-01-09 00:43:42.295959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.295963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.295967 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.295971 | orchestrator | 2026-01-09 00:43:42.295975 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-09 00:43:42.295980 | orchestrator | Friday 09 January 2026 00:43:41 +0000 (0:00:00.159) 0:00:16.423 ******** 2026-01-09 00:43:42.295984 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:42.295989 | orchestrator | 2026-01-09 00:43:42.295994 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-09 00:43:42.295999 | orchestrator | Friday 09 January 2026 00:43:41 +0000 (0:00:00.151) 0:00:16.574 ******** 2026-01-09 00:43:42.296007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.296011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.296016 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.296021 | orchestrator | 2026-01-09 00:43:42.296026 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-09 00:43:42.296030 | orchestrator | Friday 09 January 2026 00:43:41 +0000 (0:00:00.168) 0:00:16.742 ******** 2026-01-09 00:43:42.296035 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.296040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.296044 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.296049 | orchestrator | 2026-01-09 00:43:42.296053 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-09 00:43:42.296058 | orchestrator | Friday 09 January 2026 00:43:41 +0000 (0:00:00.161) 0:00:16.904 ******** 2026-01-09 00:43:42.296062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:42.296067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:42.296072 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.296076 | orchestrator | 2026-01-09 00:43:42.296081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-09 00:43:42.296089 | orchestrator | Friday 09 January 2026 00:43:42 +0000 (0:00:00.167) 0:00:17.071 ******** 2026-01-09 00:43:42.296094 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:42.296098 | orchestrator | 2026-01-09 00:43:42.296103 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-09 00:43:42.296111 | orchestrator | Friday 09 January 2026 00:43:42 +0000 (0:00:00.139) 0:00:17.210 ******** 2026-01-09 00:43:49.503365 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503456 | orchestrator | 2026-01-09 00:43:49.503464 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-09 00:43:49.503471 | orchestrator | Friday 09 January 2026 00:43:42 +0000 (0:00:00.150) 0:00:17.361 ******** 2026-01-09 00:43:49.503476 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503480 | orchestrator | 2026-01-09 00:43:49.503485 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-09 00:43:49.503490 | orchestrator | Friday 09 January 2026 00:43:42 +0000 (0:00:00.148) 0:00:17.510 ******** 2026-01-09 00:43:49.503494 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 00:43:49.503500 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-09 00:43:49.503504 | orchestrator | } 2026-01-09 00:43:49.503509 | orchestrator | 2026-01-09 00:43:49.503552 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-09 00:43:49.503557 | orchestrator | Friday 09 January 2026 00:43:43 +0000 (0:00:00.441) 0:00:17.952 ******** 2026-01-09 00:43:49.503562 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 00:43:49.503567 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-09 00:43:49.503571 | orchestrator | } 2026-01-09 00:43:49.503576 | orchestrator | 2026-01-09 00:43:49.503580 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-09 00:43:49.503585 | orchestrator | Friday 09 January 2026 00:43:43 +0000 (0:00:00.190) 0:00:18.143 ******** 2026-01-09 00:43:49.503590 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 00:43:49.503595 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-09 00:43:49.503599 | orchestrator | } 2026-01-09 00:43:49.503604 | orchestrator | 2026-01-09 00:43:49.503608 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-09 00:43:49.503613 | orchestrator | Friday 09 January 2026 00:43:43 +0000 (0:00:00.190) 0:00:18.333 ******** 2026-01-09 00:43:49.503617 | orchestrator | ok: 
[testbed-node-3] 2026-01-09 00:43:49.503622 | orchestrator | 2026-01-09 00:43:49.503626 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-09 00:43:49.503630 | orchestrator | Friday 09 January 2026 00:43:44 +0000 (0:00:00.768) 0:00:19.101 ******** 2026-01-09 00:43:49.503635 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:49.503639 | orchestrator | 2026-01-09 00:43:49.503695 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-09 00:43:49.503703 | orchestrator | Friday 09 January 2026 00:43:44 +0000 (0:00:00.541) 0:00:19.643 ******** 2026-01-09 00:43:49.503711 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:49.503716 | orchestrator | 2026-01-09 00:43:49.503720 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-09 00:43:49.503725 | orchestrator | Friday 09 January 2026 00:43:45 +0000 (0:00:00.609) 0:00:20.252 ******** 2026-01-09 00:43:49.503729 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:49.503734 | orchestrator | 2026-01-09 00:43:49.503738 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-09 00:43:49.503743 | orchestrator | Friday 09 January 2026 00:43:45 +0000 (0:00:00.171) 0:00:20.424 ******** 2026-01-09 00:43:49.503747 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503752 | orchestrator | 2026-01-09 00:43:49.503756 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-09 00:43:49.503761 | orchestrator | Friday 09 January 2026 00:43:45 +0000 (0:00:00.134) 0:00:20.559 ******** 2026-01-09 00:43:49.503765 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503770 | orchestrator | 2026-01-09 00:43:49.503774 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-09 00:43:49.503796 | orchestrator | 
Friday 09 January 2026 00:43:45 +0000 (0:00:00.134) 0:00:20.693 ******** 2026-01-09 00:43:49.503801 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 00:43:49.503806 | orchestrator |  "vgs_report": { 2026-01-09 00:43:49.503810 | orchestrator |  "vg": [] 2026-01-09 00:43:49.503815 | orchestrator |  } 2026-01-09 00:43:49.503820 | orchestrator | } 2026-01-09 00:43:49.503824 | orchestrator | 2026-01-09 00:43:49.503829 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-09 00:43:49.503833 | orchestrator | Friday 09 January 2026 00:43:45 +0000 (0:00:00.193) 0:00:20.887 ******** 2026-01-09 00:43:49.503837 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503842 | orchestrator | 2026-01-09 00:43:49.503865 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-09 00:43:49.503883 | orchestrator | Friday 09 January 2026 00:43:46 +0000 (0:00:00.177) 0:00:21.064 ******** 2026-01-09 00:43:49.503887 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503892 | orchestrator | 2026-01-09 00:43:49.503897 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-09 00:43:49.503902 | orchestrator | Friday 09 January 2026 00:43:46 +0000 (0:00:00.167) 0:00:21.232 ******** 2026-01-09 00:43:49.503907 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503912 | orchestrator | 2026-01-09 00:43:49.503917 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-09 00:43:49.503922 | orchestrator | Friday 09 January 2026 00:43:46 +0000 (0:00:00.362) 0:00:21.594 ******** 2026-01-09 00:43:49.503928 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503933 | orchestrator | 2026-01-09 00:43:49.503938 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-09 00:43:49.503943 | orchestrator | Friday 
09 January 2026 00:43:46 +0000 (0:00:00.166) 0:00:21.761 ******** 2026-01-09 00:43:49.503948 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503953 | orchestrator | 2026-01-09 00:43:49.503958 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-09 00:43:49.503963 | orchestrator | Friday 09 January 2026 00:43:46 +0000 (0:00:00.138) 0:00:21.900 ******** 2026-01-09 00:43:49.503968 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503973 | orchestrator | 2026-01-09 00:43:49.503978 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-09 00:43:49.503982 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.155) 0:00:22.055 ******** 2026-01-09 00:43:49.503987 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.503992 | orchestrator | 2026-01-09 00:43:49.503998 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-09 00:43:49.504003 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.155) 0:00:22.210 ******** 2026-01-09 00:43:49.504020 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504025 | orchestrator | 2026-01-09 00:43:49.504029 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-09 00:43:49.504034 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.141) 0:00:22.352 ******** 2026-01-09 00:43:49.504038 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504042 | orchestrator | 2026-01-09 00:43:49.504047 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-09 00:43:49.504051 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.142) 0:00:22.494 ******** 2026-01-09 00:43:49.504055 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504060 | orchestrator | 2026-01-09 00:43:49.504064 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-09 00:43:49.504069 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.141) 0:00:22.636 ******** 2026-01-09 00:43:49.504073 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504077 | orchestrator | 2026-01-09 00:43:49.504082 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-09 00:43:49.504086 | orchestrator | Friday 09 January 2026 00:43:47 +0000 (0:00:00.153) 0:00:22.790 ******** 2026-01-09 00:43:49.504095 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504099 | orchestrator | 2026-01-09 00:43:49.504103 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-09 00:43:49.504108 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.163) 0:00:22.953 ******** 2026-01-09 00:43:49.504112 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504117 | orchestrator | 2026-01-09 00:43:49.504121 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-09 00:43:49.504125 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.136) 0:00:23.090 ******** 2026-01-09 00:43:49.504130 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504134 | orchestrator | 2026-01-09 00:43:49.504138 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-09 00:43:49.504143 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.130) 0:00:23.220 ******** 2026-01-09 00:43:49.504148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:49.504154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 
'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:49.504158 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504163 | orchestrator | 2026-01-09 00:43:49.504167 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-09 00:43:49.504173 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.379) 0:00:23.600 ******** 2026-01-09 00:43:49.504181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:49.504187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:49.504194 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504201 | orchestrator | 2026-01-09 00:43:49.504209 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-09 00:43:49.504220 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.147) 0:00:23.747 ******** 2026-01-09 00:43:49.504229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:49.504233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:49.504238 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504242 | orchestrator | 2026-01-09 00:43:49.504247 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-09 00:43:49.504251 | orchestrator | Friday 09 January 2026 00:43:48 +0000 (0:00:00.161) 0:00:23.909 ******** 2026-01-09 00:43:49.504255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:49.504260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:49.504264 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504269 | orchestrator | 2026-01-09 00:43:49.504273 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-09 00:43:49.504277 | orchestrator | Friday 09 January 2026 00:43:49 +0000 (0:00:00.154) 0:00:24.063 ******** 2026-01-09 00:43:49.504282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:49.504286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:49.504295 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:49.504299 | orchestrator | 2026-01-09 00:43:49.504304 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-09 00:43:49.504308 | orchestrator | Friday 09 January 2026 00:43:49 +0000 (0:00:00.165) 0:00:24.229 ******** 2026-01-09 00:43:49.504316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.700783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.700869 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.700877 | orchestrator | 2026-01-09 00:43:54.700883 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-09 00:43:54.700888 | orchestrator | Friday 09 January 2026 00:43:49 +0000 (0:00:00.189) 0:00:24.419 ******** 2026-01-09 00:43:54.700893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.700897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.700901 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.700905 | orchestrator | 2026-01-09 00:43:54.700909 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-09 00:43:54.700913 | orchestrator | Friday 09 January 2026 00:43:49 +0000 (0:00:00.178) 0:00:24.598 ******** 2026-01-09 00:43:54.700917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.700921 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.700924 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.700928 | orchestrator | 2026-01-09 00:43:54.700932 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-09 00:43:54.700936 | orchestrator | Friday 09 January 2026 00:43:49 +0000 (0:00:00.158) 0:00:24.756 ******** 2026-01-09 00:43:54.700939 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:54.700945 | orchestrator | 2026-01-09 00:43:54.700948 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-09 00:43:54.700952 | orchestrator | Friday 09 January 2026 00:43:50 +0000 
(0:00:00.529) 0:00:25.286 ******** 2026-01-09 00:43:54.700956 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:54.700959 | orchestrator | 2026-01-09 00:43:54.700963 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-09 00:43:54.700967 | orchestrator | Friday 09 January 2026 00:43:50 +0000 (0:00:00.562) 0:00:25.848 ******** 2026-01-09 00:43:54.700971 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:43:54.700974 | orchestrator | 2026-01-09 00:43:54.700978 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-09 00:43:54.700982 | orchestrator | Friday 09 January 2026 00:43:51 +0000 (0:00:00.169) 0:00:26.018 ******** 2026-01-09 00:43:54.700986 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'vg_name': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'}) 2026-01-09 00:43:54.700992 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'vg_name': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'}) 2026-01-09 00:43:54.700996 | orchestrator | 2026-01-09 00:43:54.701000 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-09 00:43:54.701004 | orchestrator | Friday 09 January 2026 00:43:51 +0000 (0:00:00.172) 0:00:26.190 ******** 2026-01-09 00:43:54.701024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.701028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.701032 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.701036 | orchestrator | 2026-01-09 00:43:54.701040 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-09 00:43:54.701043 | orchestrator | Friday 09 January 2026 00:43:51 +0000 (0:00:00.391) 0:00:26.581 ******** 2026-01-09 00:43:54.701047 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.701051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.701055 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.701059 | orchestrator | 2026-01-09 00:43:54.701062 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-09 00:43:54.701066 | orchestrator | Friday 09 January 2026 00:43:51 +0000 (0:00:00.163) 0:00:26.745 ******** 2026-01-09 00:43:54.701070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})  2026-01-09 00:43:54.701074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})  2026-01-09 00:43:54.701077 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:43:54.701081 | orchestrator | 2026-01-09 00:43:54.701085 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-09 00:43:54.701089 | orchestrator | Friday 09 January 2026 00:43:51 +0000 (0:00:00.168) 0:00:26.914 ******** 2026-01-09 00:43:54.701102 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 00:43:54.701106 | orchestrator |  "lvm_report": { 2026-01-09 00:43:54.701110 | orchestrator |  "lv": [ 2026-01-09 00:43:54.701114 | orchestrator |  { 2026-01-09 00:43:54.701118 | orchestrator |  "lv_name": 
"osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca", 2026-01-09 00:43:54.701123 | orchestrator |  "vg_name": "ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca" 2026-01-09 00:43:54.701127 | orchestrator |  }, 2026-01-09 00:43:54.701130 | orchestrator |  { 2026-01-09 00:43:54.701134 | orchestrator |  "lv_name": "osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96", 2026-01-09 00:43:54.701138 | orchestrator |  "vg_name": "ceph-8cf949ba-669c-5e80-aece-22faa35a4e96" 2026-01-09 00:43:54.701142 | orchestrator |  } 2026-01-09 00:43:54.701145 | orchestrator |  ], 2026-01-09 00:43:54.701149 | orchestrator |  "pv": [ 2026-01-09 00:43:54.701153 | orchestrator |  { 2026-01-09 00:43:54.701157 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-09 00:43:54.701161 | orchestrator |  "vg_name": "ceph-8cf949ba-669c-5e80-aece-22faa35a4e96" 2026-01-09 00:43:54.701164 | orchestrator |  }, 2026-01-09 00:43:54.701168 | orchestrator |  { 2026-01-09 00:43:54.701172 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-09 00:43:54.701175 | orchestrator |  "vg_name": "ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca" 2026-01-09 00:43:54.701191 | orchestrator |  } 2026-01-09 00:43:54.701195 | orchestrator |  ] 2026-01-09 00:43:54.701199 | orchestrator |  } 2026-01-09 00:43:54.701203 | orchestrator | } 2026-01-09 00:43:54.701207 | orchestrator | 2026-01-09 00:43:54.701211 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-09 00:43:54.701215 | orchestrator | 2026-01-09 00:43:54.701219 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-09 00:43:54.701226 | orchestrator | Friday 09 January 2026 00:43:52 +0000 (0:00:00.305) 0:00:27.219 ******** 2026-01-09 00:43:54.701230 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-09 00:43:54.701234 | orchestrator | 2026-01-09 00:43:54.701238 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-09 
00:43:54.701241 | orchestrator | Friday 09 January 2026 00:43:52 +0000 (0:00:00.244) 0:00:27.463 ******** 2026-01-09 00:43:54.701245 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:43:54.701249 | orchestrator | 2026-01-09 00:43:54.701253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701256 | orchestrator | Friday 09 January 2026 00:43:52 +0000 (0:00:00.248) 0:00:27.712 ******** 2026-01-09 00:43:54.701260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-09 00:43:54.701264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-09 00:43:54.701268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-09 00:43:54.701271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-09 00:43:54.701275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-09 00:43:54.701279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-09 00:43:54.701285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-09 00:43:54.701289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-09 00:43:54.701293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-09 00:43:54.701297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-09 00:43:54.701300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-09 00:43:54.701304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-09 00:43:54.701308 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-09 00:43:54.701311 | orchestrator | 2026-01-09 00:43:54.701315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701319 | orchestrator | Friday 09 January 2026 00:43:53 +0000 (0:00:00.409) 0:00:28.121 ******** 2026-01-09 00:43:54.701323 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701326 | orchestrator | 2026-01-09 00:43:54.701331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701335 | orchestrator | Friday 09 January 2026 00:43:53 +0000 (0:00:00.191) 0:00:28.313 ******** 2026-01-09 00:43:54.701340 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701344 | orchestrator | 2026-01-09 00:43:54.701348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701353 | orchestrator | Friday 09 January 2026 00:43:53 +0000 (0:00:00.202) 0:00:28.515 ******** 2026-01-09 00:43:54.701357 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701361 | orchestrator | 2026-01-09 00:43:54.701365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701370 | orchestrator | Friday 09 January 2026 00:43:54 +0000 (0:00:00.537) 0:00:29.052 ******** 2026-01-09 00:43:54.701374 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701378 | orchestrator | 2026-01-09 00:43:54.701383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:43:54.701387 | orchestrator | Friday 09 January 2026 00:43:54 +0000 (0:00:00.197) 0:00:29.249 ******** 2026-01-09 00:43:54.701391 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701395 | orchestrator | 2026-01-09 00:43:54.701400 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-09 00:43:54.701407 | orchestrator | Friday 09 January 2026 00:43:54 +0000 (0:00:00.167) 0:00:29.417 ******** 2026-01-09 00:43:54.701412 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:43:54.701416 | orchestrator | 2026-01-09 00:43:54.701424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066356 | orchestrator | Friday 09 January 2026 00:43:54 +0000 (0:00:00.199) 0:00:29.617 ******** 2026-01-09 00:44:06.066472 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.066485 | orchestrator | 2026-01-09 00:44:06.066495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066503 | orchestrator | Friday 09 January 2026 00:43:54 +0000 (0:00:00.193) 0:00:29.810 ******** 2026-01-09 00:44:06.066511 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.066519 | orchestrator | 2026-01-09 00:44:06.066527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066535 | orchestrator | Friday 09 January 2026 00:43:55 +0000 (0:00:00.192) 0:00:30.003 ******** 2026-01-09 00:44:06.066543 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430) 2026-01-09 00:44:06.066552 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430) 2026-01-09 00:44:06.066559 | orchestrator | 2026-01-09 00:44:06.066567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066575 | orchestrator | Friday 09 January 2026 00:43:55 +0000 (0:00:00.398) 0:00:30.402 ******** 2026-01-09 00:44:06.066582 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e) 2026-01-09 00:44:06.066590 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e) 2026-01-09 00:44:06.066597 | orchestrator | 2026-01-09 00:44:06.066605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066612 | orchestrator | Friday 09 January 2026 00:43:55 +0000 (0:00:00.372) 0:00:30.774 ******** 2026-01-09 00:44:06.066620 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541) 2026-01-09 00:44:06.066628 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541) 2026-01-09 00:44:06.066680 | orchestrator | 2026-01-09 00:44:06.066689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066697 | orchestrator | Friday 09 January 2026 00:43:56 +0000 (0:00:00.373) 0:00:31.148 ******** 2026-01-09 00:44:06.066704 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795) 2026-01-09 00:44:06.066712 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795) 2026-01-09 00:44:06.066719 | orchestrator | 2026-01-09 00:44:06.066727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-09 00:44:06.066734 | orchestrator | Friday 09 January 2026 00:43:56 +0000 (0:00:00.566) 0:00:31.715 ******** 2026-01-09 00:44:06.066742 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-09 00:44:06.066749 | orchestrator | 2026-01-09 00:44:06.066757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.066764 | orchestrator | Friday 09 January 2026 00:43:57 +0000 (0:00:00.494) 0:00:32.210 ******** 2026-01-09 00:44:06.066786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-09 00:44:06.066795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-09 00:44:06.066802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-09 00:44:06.066809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-09 00:44:06.066817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-09 00:44:06.066847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-09 00:44:06.066855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-09 00:44:06.066864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-09 00:44:06.066872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-09 00:44:06.066882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-09 00:44:06.066890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-09 00:44:06.066899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-09 00:44:06.066908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-09 00:44:06.066917 | orchestrator | 2026-01-09 00:44:06.066926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.066935 | orchestrator | Friday 09 January 2026 00:43:58 +0000 (0:00:00.729) 0:00:32.939 ******** 2026-01-09 00:44:06.066943 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.066952 | orchestrator | 2026-01-09 
00:44:06.066961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.066970 | orchestrator | Friday 09 January 2026 00:43:58 +0000 (0:00:00.313) 0:00:33.253 ******** 2026-01-09 00:44:06.066979 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.066987 | orchestrator | 2026-01-09 00:44:06.066996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067006 | orchestrator | Friday 09 January 2026 00:43:58 +0000 (0:00:00.244) 0:00:33.497 ******** 2026-01-09 00:44:06.067015 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067023 | orchestrator | 2026-01-09 00:44:06.067048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067057 | orchestrator | Friday 09 January 2026 00:43:58 +0000 (0:00:00.283) 0:00:33.780 ******** 2026-01-09 00:44:06.067067 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067075 | orchestrator | 2026-01-09 00:44:06.067084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067094 | orchestrator | Friday 09 January 2026 00:43:59 +0000 (0:00:00.250) 0:00:34.031 ******** 2026-01-09 00:44:06.067102 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067112 | orchestrator | 2026-01-09 00:44:06.067120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067129 | orchestrator | Friday 09 January 2026 00:43:59 +0000 (0:00:00.278) 0:00:34.310 ******** 2026-01-09 00:44:06.067138 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067147 | orchestrator | 2026-01-09 00:44:06.067156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067164 | orchestrator | Friday 09 January 2026 00:43:59 +0000 (0:00:00.296) 
0:00:34.606 ******** 2026-01-09 00:44:06.067174 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067182 | orchestrator | 2026-01-09 00:44:06.067191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067199 | orchestrator | Friday 09 January 2026 00:43:59 +0000 (0:00:00.259) 0:00:34.866 ******** 2026-01-09 00:44:06.067208 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067217 | orchestrator | 2026-01-09 00:44:06.067226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067234 | orchestrator | Friday 09 January 2026 00:44:00 +0000 (0:00:00.243) 0:00:35.109 ******** 2026-01-09 00:44:06.067242 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-09 00:44:06.067249 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-09 00:44:06.067258 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-09 00:44:06.067265 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-09 00:44:06.067280 | orchestrator | 2026-01-09 00:44:06.067287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067295 | orchestrator | Friday 09 January 2026 00:44:01 +0000 (0:00:00.859) 0:00:35.969 ******** 2026-01-09 00:44:06.067302 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067309 | orchestrator | 2026-01-09 00:44:06.067316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067324 | orchestrator | Friday 09 January 2026 00:44:01 +0000 (0:00:00.201) 0:00:36.170 ******** 2026-01-09 00:44:06.067331 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067338 | orchestrator | 2026-01-09 00:44:06.067345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067353 | orchestrator | Friday 09 
January 2026 00:44:01 +0000 (0:00:00.528) 0:00:36.699 ******** 2026-01-09 00:44:06.067360 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067367 | orchestrator | 2026-01-09 00:44:06.067375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-09 00:44:06.067382 | orchestrator | Friday 09 January 2026 00:44:01 +0000 (0:00:00.185) 0:00:36.885 ******** 2026-01-09 00:44:06.067389 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067396 | orchestrator | 2026-01-09 00:44:06.067404 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-09 00:44:06.067412 | orchestrator | Friday 09 January 2026 00:44:02 +0000 (0:00:00.239) 0:00:37.124 ******** 2026-01-09 00:44:06.067419 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067426 | orchestrator | 2026-01-09 00:44:06.067434 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-09 00:44:06.067442 | orchestrator | Friday 09 January 2026 00:44:02 +0000 (0:00:00.149) 0:00:37.274 ******** 2026-01-09 00:44:06.067449 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2edbad7c-3e58-5742-8752-3a5bd5d561b5'}}) 2026-01-09 00:44:06.067457 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '209c90a3-928e-55d9-9ec8-b900c012dcc3'}}) 2026-01-09 00:44:06.067465 | orchestrator | 2026-01-09 00:44:06.067472 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-09 00:44:06.067479 | orchestrator | Friday 09 January 2026 00:44:02 +0000 (0:00:00.165) 0:00:37.440 ******** 2026-01-09 00:44:06.067488 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'}) 2026-01-09 00:44:06.067497 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'}) 2026-01-09 00:44:06.067508 | orchestrator | 2026-01-09 00:44:06.067521 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-09 00:44:06.067533 | orchestrator | Friday 09 January 2026 00:44:04 +0000 (0:00:01.941) 0:00:39.381 ******** 2026-01-09 00:44:06.067545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})  2026-01-09 00:44:06.067558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})  2026-01-09 00:44:06.067570 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:06.067582 | orchestrator | 2026-01-09 00:44:06.067594 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-09 00:44:06.067606 | orchestrator | Friday 09 January 2026 00:44:04 +0000 (0:00:00.158) 0:00:39.540 ******** 2026-01-09 00:44:06.067619 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'}) 2026-01-09 00:44:06.067655 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'}) 2026-01-09 00:44:11.880293 | orchestrator | 2026-01-09 00:44:11.880415 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-09 00:44:11.880430 | orchestrator | Friday 09 January 2026 00:44:06 +0000 (0:00:01.439) 0:00:40.979 ******** 2026-01-09 00:44:11.880457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 
'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})  2026-01-09 00:44:11.880470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})  2026-01-09 00:44:11.880479 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:11.880489 | orchestrator | 2026-01-09 00:44:11.880498 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-09 00:44:11.880507 | orchestrator | Friday 09 January 2026 00:44:06 +0000 (0:00:00.170) 0:00:41.150 ******** 2026-01-09 00:44:11.880517 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:11.880526 | orchestrator | 2026-01-09 00:44:11.880535 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-09 00:44:11.880544 | orchestrator | Friday 09 January 2026 00:44:06 +0000 (0:00:00.158) 0:00:41.309 ******** 2026-01-09 00:44:11.880553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})  2026-01-09 00:44:11.880562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})  2026-01-09 00:44:11.880570 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:11.880579 | orchestrator | 2026-01-09 00:44:11.880588 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-09 00:44:11.880597 | orchestrator | Friday 09 January 2026 00:44:06 +0000 (0:00:00.164) 0:00:41.473 ******** 2026-01-09 00:44:11.880605 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:44:11.880614 | orchestrator | 2026-01-09 00:44:11.880623 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-09 00:44:11.880668 | orchestrator | Friday 
09 January 2026 00:44:06 +0000 (0:00:00.140) 0:00:41.613 ********
2026-01-09 00:44:11.880678 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:11.880688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:11.880697 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.880706 | orchestrator |
2026-01-09 00:44:11.880715 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-09 00:44:11.880729 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.400) 0:00:42.014 ********
2026-01-09 00:44:11.880738 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.880747 | orchestrator |
2026-01-09 00:44:11.880756 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-09 00:44:11.880765 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.141) 0:00:42.155 ********
2026-01-09 00:44:11.880774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:11.880783 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:11.880793 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.880803 | orchestrator |
2026-01-09 00:44:11.880814 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-09 00:44:11.880825 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.161) 0:00:42.316 ********
2026-01-09 00:44:11.880835 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:11.880868 | orchestrator |
2026-01-09 00:44:11.880879 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-09 00:44:11.880889 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.162) 0:00:42.479 ********
2026-01-09 00:44:11.880900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:11.880911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:11.880921 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.880931 | orchestrator |
2026-01-09 00:44:11.880941 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-09 00:44:11.880951 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.160) 0:00:42.640 ********
2026-01-09 00:44:11.880962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:11.880972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:11.880982 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.880992 | orchestrator |
2026-01-09 00:44:11.881002 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-09 00:44:11.881029 | orchestrator | Friday 09 January 2026 00:44:07 +0000 (0:00:00.168) 0:00:42.809 ********
2026-01-09 00:44:11.881040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:11.881050 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:11.881060 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881070 | orchestrator |
2026-01-09 00:44:11.881081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-09 00:44:11.881091 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.170) 0:00:42.979 ********
2026-01-09 00:44:11.881101 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881111 | orchestrator |
2026-01-09 00:44:11.881121 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-09 00:44:11.881131 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.134) 0:00:43.114 ********
2026-01-09 00:44:11.881142 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881151 | orchestrator |
2026-01-09 00:44:11.881160 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-09 00:44:11.881168 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.137) 0:00:43.251 ********
2026-01-09 00:44:11.881177 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881186 | orchestrator |
2026-01-09 00:44:11.881194 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-09 00:44:11.881209 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.161) 0:00:43.413 ********
2026-01-09 00:44:11.881224 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:44:11.881238 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-09 00:44:11.881252 | orchestrator | }
2026-01-09 00:44:11.881269 | orchestrator |
2026-01-09 00:44:11.881284 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-09 00:44:11.881299 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.149) 0:00:43.563 ********
2026-01-09 00:44:11.881309 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:44:11.881318 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-09 00:44:11.881326 | orchestrator | }
2026-01-09 00:44:11.881335 | orchestrator |
2026-01-09 00:44:11.881344 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-09 00:44:11.881352 | orchestrator | Friday 09 January 2026 00:44:08 +0000 (0:00:00.152) 0:00:43.715 ********
2026-01-09 00:44:11.881370 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:44:11.881379 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-09 00:44:11.881388 | orchestrator | }
2026-01-09 00:44:11.881397 | orchestrator |
2026-01-09 00:44:11.881405 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-09 00:44:11.881414 | orchestrator | Friday 09 January 2026 00:44:09 +0000 (0:00:00.353) 0:00:44.069 ********
2026-01-09 00:44:11.881423 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:11.881432 | orchestrator |
2026-01-09 00:44:11.881440 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-09 00:44:11.881455 | orchestrator | Friday 09 January 2026 00:44:09 +0000 (0:00:00.533) 0:00:44.603 ********
2026-01-09 00:44:11.881464 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:11.881472 | orchestrator |
2026-01-09 00:44:11.881481 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-09 00:44:11.881490 | orchestrator | Friday 09 January 2026 00:44:10 +0000 (0:00:00.540) 0:00:45.144 ********
2026-01-09 00:44:11.881498 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:11.881507 | orchestrator |
2026-01-09 00:44:11.881516 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-09 00:44:11.881524 | orchestrator | Friday 09 January 2026 00:44:10 +0000 (0:00:00.551) 0:00:45.696 ********
2026-01-09 00:44:11.881533 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:11.881541 | orchestrator |
2026-01-09 00:44:11.881550 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-09 00:44:11.881558 | orchestrator | Friday 09 January 2026 00:44:10 +0000 (0:00:00.157) 0:00:45.853 ********
2026-01-09 00:44:11.881567 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881575 | orchestrator |
2026-01-09 00:44:11.881584 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-09 00:44:11.881593 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.121) 0:00:45.974 ********
2026-01-09 00:44:11.881601 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881610 | orchestrator |
2026-01-09 00:44:11.881618 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-09 00:44:11.881627 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.105) 0:00:46.080 ********
2026-01-09 00:44:11.881664 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:44:11.881674 | orchestrator |     "vgs_report": {
2026-01-09 00:44:11.881682 | orchestrator |         "vg": []
2026-01-09 00:44:11.881691 | orchestrator |     }
2026-01-09 00:44:11.881700 | orchestrator | }
2026-01-09 00:44:11.881709 | orchestrator |
2026-01-09 00:44:11.881718 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-09 00:44:11.881726 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.150) 0:00:46.231 ********
2026-01-09 00:44:11.881735 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881744 | orchestrator |
2026-01-09 00:44:11.881752 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-09 00:44:11.881761 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.141) 0:00:46.372 ********
2026-01-09 00:44:11.881770 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881778 | orchestrator |
2026-01-09 00:44:11.881787 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-09 00:44:11.881796 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.129) 0:00:46.502 ********
2026-01-09 00:44:11.881804 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881813 | orchestrator |
2026-01-09 00:44:11.881822 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-09 00:44:11.881830 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.138) 0:00:46.641 ********
2026-01-09 00:44:11.881839 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:11.881848 | orchestrator |
2026-01-09 00:44:11.881863 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-09 00:44:16.928739 | orchestrator | Friday 09 January 2026 00:44:11 +0000 (0:00:00.154) 0:00:46.795 ********
2026-01-09 00:44:16.928844 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928854 | orchestrator |
2026-01-09 00:44:16.928861 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-09 00:44:16.928867 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.378) 0:00:47.174 ********
2026-01-09 00:44:16.928872 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928876 | orchestrator |
2026-01-09 00:44:16.928882 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-09 00:44:16.928887 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.137) 0:00:47.312 ********
2026-01-09 00:44:16.928892 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928897 | orchestrator |
2026-01-09 00:44:16.928902 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-09 00:44:16.928906 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.142) 0:00:47.454 ********
2026-01-09 00:44:16.928911 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928916 | orchestrator |
2026-01-09 00:44:16.928921 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-09 00:44:16.928926 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.161) 0:00:47.616 ********
2026-01-09 00:44:16.928930 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928935 | orchestrator |
2026-01-09 00:44:16.928940 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-09 00:44:16.928944 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.145) 0:00:47.761 ********
2026-01-09 00:44:16.928949 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928954 | orchestrator |
2026-01-09 00:44:16.928959 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-09 00:44:16.928964 | orchestrator | Friday 09 January 2026 00:44:12 +0000 (0:00:00.126) 0:00:47.888 ********
2026-01-09 00:44:16.928968 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928973 | orchestrator |
2026-01-09 00:44:16.928978 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-09 00:44:16.928982 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.138) 0:00:48.027 ********
2026-01-09 00:44:16.928987 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.928992 | orchestrator |
2026-01-09 00:44:16.928997 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-09 00:44:16.929001 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.152) 0:00:48.179 ********
2026-01-09 00:44:16.929006 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929011 | orchestrator |
2026-01-09 00:44:16.929016 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-09 00:44:16.929020 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.147) 0:00:48.326 ********
2026-01-09 00:44:16.929025 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929030 | orchestrator |
2026-01-09 00:44:16.929035 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-09 00:44:16.929041 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.145) 0:00:48.472 ********
2026-01-09 00:44:16.929046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929063 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929070 | orchestrator |
2026-01-09 00:44:16.929078 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-09 00:44:16.929086 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.167) 0:00:48.640 ********
2026-01-09 00:44:16.929095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929117 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929125 | orchestrator |
2026-01-09 00:44:16.929132 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-09 00:44:16.929140 | orchestrator | Friday 09 January 2026 00:44:13 +0000 (0:00:00.178) 0:00:48.818 ********
2026-01-09 00:44:16.929148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929164 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929171 | orchestrator |
2026-01-09 00:44:16.929179 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-09 00:44:16.929187 | orchestrator | Friday 09 January 2026 00:44:14 +0000 (0:00:00.403) 0:00:49.221 ********
2026-01-09 00:44:16.929196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929211 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929219 | orchestrator |
2026-01-09 00:44:16.929244 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-09 00:44:16.929252 | orchestrator | Friday 09 January 2026 00:44:14 +0000 (0:00:00.157) 0:00:49.379 ********
2026-01-09 00:44:16.929261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929278 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929286 | orchestrator |
2026-01-09 00:44:16.929295 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-09 00:44:16.929341 | orchestrator | Friday 09 January 2026 00:44:14 +0000 (0:00:00.161) 0:00:49.540 ********
2026-01-09 00:44:16.929351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929369 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929377 | orchestrator |
2026-01-09 00:44:16.929386 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-09 00:44:16.929394 | orchestrator | Friday 09 January 2026 00:44:14 +0000 (0:00:00.180) 0:00:49.720 ********
2026-01-09 00:44:16.929448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929466 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929475 | orchestrator |
2026-01-09 00:44:16.929483 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-09 00:44:16.929492 | orchestrator | Friday 09 January 2026 00:44:14 +0000 (0:00:00.161) 0:00:49.882 ********
2026-01-09 00:44:16.929506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929527 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929535 | orchestrator |
2026-01-09 00:44:16.929543 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-09 00:44:16.929552 | orchestrator | Friday 09 January 2026 00:44:15 +0000 (0:00:00.169) 0:00:50.051 ********
2026-01-09 00:44:16.929560 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:16.929569 | orchestrator |
2026-01-09 00:44:16.929577 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-09 00:44:16.929586 | orchestrator | Friday 09 January 2026 00:44:15 +0000 (0:00:00.595) 0:00:50.647 ********
2026-01-09 00:44:16.929594 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:16.929602 | orchestrator |
2026-01-09 00:44:16.929610 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-09 00:44:16.929618 | orchestrator | Friday 09 January 2026 00:44:16 +0000 (0:00:00.537) 0:00:51.184 ********
2026-01-09 00:44:16.929625 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:44:16.929650 | orchestrator |
2026-01-09 00:44:16.929658 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-09 00:44:16.929665 | orchestrator | Friday 09 January 2026 00:44:16 +0000 (0:00:00.141) 0:00:51.326 ********
2026-01-09 00:44:16.929673 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'vg_name': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929684 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'vg_name': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929691 | orchestrator |
2026-01-09 00:44:16.929699 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-09 00:44:16.929707 | orchestrator | Friday 09 January 2026 00:44:16 +0000 (0:00:00.185) 0:00:51.511 ********
2026-01-09 00:44:16.929715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:16.929731 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:16.929738 | orchestrator |
2026-01-09 00:44:16.929746 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-09 00:44:16.929754 | orchestrator | Friday 09 January 2026 00:44:16 +0000 (0:00:00.179) 0:00:51.691 ********
2026-01-09 00:44:16.929762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:16.929777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:22.890478 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:22.890590 | orchestrator |
2026-01-09 00:44:22.890604 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-09 00:44:22.890613 | orchestrator | Friday 09 January 2026 00:44:16 +0000 (0:00:00.154) 0:00:51.845 ********
2026-01-09 00:44:22.890621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:44:22.890679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:44:22.890688 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:44:22.890726 | orchestrator |
2026-01-09 00:44:22.890734 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-09 00:44:22.890742 | orchestrator | Friday 09 January 2026 00:44:17 +0000 (0:00:00.165) 0:00:52.011 ********
2026-01-09 00:44:22.890750 | orchestrator | ok: [testbed-node-4] => {
2026-01-09 00:44:22.890758 | orchestrator |     "lvm_report": {
2026-01-09 00:44:22.890767 | orchestrator |         "lv": [
2026-01-09 00:44:22.890774 | orchestrator |             {
2026-01-09 00:44:22.890782 | orchestrator |                 "lv_name": "osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3",
2026-01-09 00:44:22.890791 | orchestrator |                 "vg_name": "ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3"
2026-01-09 00:44:22.890799 | orchestrator |             },
2026-01-09 00:44:22.890806 | orchestrator |             {
2026-01-09 00:44:22.890814 | orchestrator |                 "lv_name": "osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5",
2026-01-09 00:44:22.890821 | orchestrator |                 "vg_name": "ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5"
2026-01-09 00:44:22.890829 | orchestrator |             }
2026-01-09 00:44:22.890836 | orchestrator |         ],
2026-01-09 00:44:22.890850 | orchestrator |         "pv": [
2026-01-09 00:44:22.890857 | orchestrator |             {
2026-01-09 00:44:22.890868 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-09 00:44:22.890878 | orchestrator |                 "vg_name": "ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5"
2026-01-09 00:44:22.890885 | orchestrator |             },
2026-01-09 00:44:22.890893 | orchestrator |             {
2026-01-09 00:44:22.890900 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-09 00:44:22.890907 | orchestrator |                 "vg_name": "ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3"
2026-01-09 00:44:22.890914 | orchestrator |             }
2026-01-09 00:44:22.890921 | orchestrator |         ]
2026-01-09 00:44:22.890928 | orchestrator |     }
2026-01-09 00:44:22.890935 | orchestrator | }
2026-01-09 00:44:22.890942 | orchestrator |
2026-01-09 00:44:22.890949 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-09 00:44:22.890956 | orchestrator |
2026-01-09 00:44:22.890964 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-09 00:44:22.890985 | orchestrator | Friday 09 January 2026 00:44:17 +0000 (0:00:00.519) 0:00:52.531 ********
2026-01-09 00:44:22.890993 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-09 00:44:22.891001 | orchestrator |
2026-01-09 00:44:22.891010 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-09 00:44:22.891017 | orchestrator | Friday 09 January 2026 00:44:17 +0000 (0:00:00.254) 0:00:52.785 ********
2026-01-09 00:44:22.891025 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:44:22.891033 | orchestrator |
2026-01-09 00:44:22.891040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891046 | orchestrator | Friday 09 January 2026 00:44:18 +0000 (0:00:00.239) 0:00:53.025 ********
2026-01-09 00:44:22.891053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-09 00:44:22.891060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-09 00:44:22.891068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-09 00:44:22.891074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-09 00:44:22.891082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-09 00:44:22.891089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-09 00:44:22.891096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-09 00:44:22.891103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-09 00:44:22.891110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-09 00:44:22.891126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-09 00:44:22.891133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-09 00:44:22.891141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-09 00:44:22.891148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-09 00:44:22.891155 | orchestrator |
2026-01-09 00:44:22.891166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891172 | orchestrator | Friday 09 January 2026 00:44:18 +0000 (0:00:00.391) 0:00:53.416 ********
2026-01-09 00:44:22.891179 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891186 | orchestrator |
2026-01-09 00:44:22.891193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891201 | orchestrator | Friday 09 January 2026 00:44:18 +0000 (0:00:00.228) 0:00:53.645 ********
2026-01-09 00:44:22.891209 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891217 | orchestrator |
2026-01-09 00:44:22.891224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891248 | orchestrator | Friday 09 January 2026 00:44:18 +0000 (0:00:00.227) 0:00:53.873 ********
2026-01-09 00:44:22.891254 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891259 | orchestrator |
2026-01-09 00:44:22.891264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891269 | orchestrator | Friday 09 January 2026 00:44:19 +0000 (0:00:00.208) 0:00:54.082 ********
2026-01-09 00:44:22.891274 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891279 | orchestrator |
2026-01-09 00:44:22.891284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891289 | orchestrator | Friday 09 January 2026 00:44:19 +0000 (0:00:00.184) 0:00:54.266 ********
2026-01-09 00:44:22.891294 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891299 | orchestrator |
2026-01-09 00:44:22.891304 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891309 | orchestrator | Friday 09 January 2026 00:44:19 +0000 (0:00:00.483) 0:00:54.750 ********
2026-01-09 00:44:22.891315 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891320 | orchestrator |
2026-01-09 00:44:22.891325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891330 | orchestrator | Friday 09 January 2026 00:44:20 +0000 (0:00:00.180) 0:00:54.931 ********
2026-01-09 00:44:22.891335 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891340 | orchestrator |
2026-01-09 00:44:22.891346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891351 | orchestrator | Friday 09 January 2026 00:44:20 +0000 (0:00:00.200) 0:00:55.131 ********
2026-01-09 00:44:22.891356 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:22.891361 | orchestrator |
2026-01-09 00:44:22.891365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891369 | orchestrator | Friday 09 January 2026 00:44:20 +0000 (0:00:00.196) 0:00:55.328 ********
2026-01-09 00:44:22.891374 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b)
2026-01-09 00:44:22.891380 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b)
2026-01-09 00:44:22.891384 | orchestrator |
2026-01-09 00:44:22.891389 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891393 | orchestrator | Friday 09 January 2026 00:44:20 +0000 (0:00:00.378) 0:00:55.707 ********
2026-01-09 00:44:22.891397 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88)
2026-01-09 00:44:22.891402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88)
2026-01-09 00:44:22.891406 | orchestrator |
2026-01-09 00:44:22.891416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891426 | orchestrator | Friday 09 January 2026 00:44:21 +0000 (0:00:00.393) 0:00:56.101 ********
2026-01-09 00:44:22.891431 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338)
2026-01-09 00:44:22.891435 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338)
2026-01-09 00:44:22.891439 | orchestrator |
2026-01-09 00:44:22.891444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891448 | orchestrator | Friday 09 January 2026 00:44:21 +0000 (0:00:00.442) 0:00:56.544 ********
2026-01-09 00:44:22.891452 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595)
2026-01-09 00:44:22.891457 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595)
2026-01-09 00:44:22.891461 | orchestrator |
2026-01-09 00:44:22.891466 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-09 00:44:22.891470 | orchestrator | Friday 09 January 2026 00:44:22 +0000 (0:00:00.453) 0:00:56.997 ********
2026-01-09 00:44:22.891474 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-09 00:44:22.891479 | orchestrator |
2026-01-09 00:44:22.891483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:22.891488 | orchestrator | Friday 09 January 2026 00:44:22 +0000 (0:00:00.358) 0:00:57.355 ********
2026-01-09 00:44:22.891492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-09 00:44:22.891496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-09 00:44:22.891501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-09 00:44:22.891505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-09 00:44:22.891509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-09 00:44:22.891514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-09 00:44:22.891518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-09 00:44:22.891522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-09 00:44:22.891527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-09 00:44:22.891531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-09 00:44:22.891535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-09 00:44:22.891543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-09 00:44:32.703974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-09 00:44:32.704063 | orchestrator |
2026-01-09 00:44:32.704070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704075 | orchestrator | Friday 09 January 2026 00:44:22 +0000 (0:00:00.443) 0:00:57.799 ********
2026-01-09 00:44:32.704080 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704085 | orchestrator |
2026-01-09 00:44:32.704090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704094 | orchestrator | Friday 09 January 2026 00:44:23 +0000 (0:00:00.211) 0:00:58.010 ********
2026-01-09 00:44:32.704098 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704103 | orchestrator |
2026-01-09 00:44:32.704107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704111 | orchestrator | Friday 09 January 2026 00:44:23 +0000 (0:00:00.864) 0:00:58.875 ********
2026-01-09 00:44:32.704133 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704138 | orchestrator |
2026-01-09 00:44:32.704142 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704146 | orchestrator | Friday 09 January 2026 00:44:24 +0000 (0:00:00.225) 0:00:59.101 ********
2026-01-09 00:44:32.704151 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704155 | orchestrator |
2026-01-09 00:44:32.704159 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704163 | orchestrator | Friday 09 January 2026 00:44:24 +0000 (0:00:00.244) 0:00:59.345 ********
2026-01-09 00:44:32.704167 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704171 | orchestrator |
2026-01-09 00:44:32.704175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704179 | orchestrator | Friday 09 January 2026 00:44:24 +0000 (0:00:00.301) 0:00:59.647 ********
2026-01-09 00:44:32.704183 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704187 | orchestrator |
2026-01-09 00:44:32.704192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704196 | orchestrator | Friday 09 January 2026 00:44:24 +0000 (0:00:00.234) 0:00:59.882 ********
2026-01-09 00:44:32.704200 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704204 | orchestrator |
2026-01-09 00:44:32.704208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704212 | orchestrator | Friday 09 January 2026 00:44:25 +0000 (0:00:00.218) 0:01:00.101 ********
2026-01-09 00:44:32.704216 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704220 | orchestrator |
2026-01-09 00:44:32.704224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704228 | orchestrator | Friday 09 January 2026 00:44:25 +0000 (0:00:00.215) 0:01:00.316 ********
2026-01-09 00:44:32.704233 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-09 00:44:32.704238 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-09 00:44:32.704242 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-09 00:44:32.704247 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-09 00:44:32.704251 | orchestrator |
2026-01-09 00:44:32.704255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704259 | orchestrator | Friday 09 January 2026 00:44:26 +0000 (0:00:00.655) 0:01:00.971 ********
2026-01-09 00:44:32.704263 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704267 | orchestrator |
2026-01-09 00:44:32.704271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704276 | orchestrator | Friday 09 January 2026 00:44:26 +0000 (0:00:00.222) 0:01:01.194 ********
2026-01-09 00:44:32.704280 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704285 | orchestrator |
2026-01-09 00:44:32.704289 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704293 | orchestrator | Friday 09 January 2026 00:44:26 +0000 (0:00:00.265) 0:01:01.459 ********
2026-01-09 00:44:32.704297 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704301 | orchestrator |
2026-01-09 00:44:32.704305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-09 00:44:32.704309 | orchestrator | Friday 09 January 2026 00:44:26 +0000 (0:00:00.190) 0:01:01.650 ********
2026-01-09 00:44:32.704313 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:44:32.704317 | orchestrator |
2026-01-09 00:44:32.704321 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-09 00:44:32.704325 | orchestrator | Friday 09 January 2026 00:44:27 +0000 (0:00:00.286) 0:01:01.936 ********
2026-01-09 00:44:32.704329 | orchestrator | skipping: [testbed-node-5]
2026-01-09
00:44:32.704334 | orchestrator | 2026-01-09 00:44:32.704338 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-09 00:44:32.704342 | orchestrator | Friday 09 January 2026 00:44:27 +0000 (0:00:00.356) 0:01:02.293 ******** 2026-01-09 00:44:32.704346 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11533966-1bdf-5daf-a468-949db0b9bc1b'}}) 2026-01-09 00:44:32.704355 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}}) 2026-01-09 00:44:32.704359 | orchestrator | 2026-01-09 00:44:32.704363 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-09 00:44:32.704367 | orchestrator | Friday 09 January 2026 00:44:27 +0000 (0:00:00.203) 0:01:02.497 ******** 2026-01-09 00:44:32.704372 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'}) 2026-01-09 00:44:32.704390 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}) 2026-01-09 00:44:32.704394 | orchestrator | 2026-01-09 00:44:32.704398 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-09 00:44:32.704413 | orchestrator | Friday 09 January 2026 00:44:29 +0000 (0:00:01.878) 0:01:04.376 ******** 2026-01-09 00:44:32.704417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:32.704423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:32.704427 | orchestrator | skipping: 
[testbed-node-5] 2026-01-09 00:44:32.704431 | orchestrator | 2026-01-09 00:44:32.704435 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-09 00:44:32.704440 | orchestrator | Friday 09 January 2026 00:44:29 +0000 (0:00:00.189) 0:01:04.565 ******** 2026-01-09 00:44:32.704444 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'}) 2026-01-09 00:44:32.704448 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}) 2026-01-09 00:44:32.704452 | orchestrator | 2026-01-09 00:44:32.704456 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-09 00:44:32.704460 | orchestrator | Friday 09 January 2026 00:44:30 +0000 (0:00:01.329) 0:01:05.895 ******** 2026-01-09 00:44:32.704465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:32.704469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:32.704473 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704480 | orchestrator | 2026-01-09 00:44:32.704487 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-09 00:44:32.704493 | orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.180) 0:01:06.076 ******** 2026-01-09 00:44:32.704501 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704508 | orchestrator | 2026-01-09 00:44:32.704515 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-09 00:44:32.704522 | 
orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.149) 0:01:06.226 ******** 2026-01-09 00:44:32.704544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:32.704553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:32.704561 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704569 | orchestrator | 2026-01-09 00:44:32.704576 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-09 00:44:32.704585 | orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.167) 0:01:06.394 ******** 2026-01-09 00:44:32.704590 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704595 | orchestrator | 2026-01-09 00:44:32.704600 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-09 00:44:32.704604 | orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.147) 0:01:06.541 ******** 2026-01-09 00:44:32.704609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:32.704614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:32.704637 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704642 | orchestrator | 2026-01-09 00:44:32.704648 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-09 00:44:32.704653 | orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.147) 0:01:06.689 ******** 2026-01-09 00:44:32.704658 | orchestrator | 
skipping: [testbed-node-5] 2026-01-09 00:44:32.704662 | orchestrator | 2026-01-09 00:44:32.704667 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-09 00:44:32.704672 | orchestrator | Friday 09 January 2026 00:44:31 +0000 (0:00:00.140) 0:01:06.829 ******** 2026-01-09 00:44:32.704677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:32.704682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:32.704687 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:32.704691 | orchestrator | 2026-01-09 00:44:32.704696 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-09 00:44:32.704701 | orchestrator | Friday 09 January 2026 00:44:32 +0000 (0:00:00.212) 0:01:07.042 ******** 2026-01-09 00:44:32.704706 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:32.704711 | orchestrator | 2026-01-09 00:44:32.704715 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-09 00:44:32.704720 | orchestrator | Friday 09 January 2026 00:44:32 +0000 (0:00:00.417) 0:01:07.460 ******** 2026-01-09 00:44:32.704730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:38.732285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:38.732407 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732419 | orchestrator | 2026-01-09 00:44:38.732426 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-09 00:44:38.732435 | orchestrator | Friday 09 January 2026 00:44:32 +0000 (0:00:00.161) 0:01:07.622 ******** 2026-01-09 00:44:38.732444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:38.732450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:38.732456 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732462 | orchestrator | 2026-01-09 00:44:38.732469 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-09 00:44:38.732475 | orchestrator | Friday 09 January 2026 00:44:32 +0000 (0:00:00.183) 0:01:07.805 ******** 2026-01-09 00:44:38.732482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:38.732489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:38.732517 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732524 | orchestrator | 2026-01-09 00:44:38.732531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-09 00:44:38.732537 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.181) 0:01:07.987 ******** 2026-01-09 00:44:38.732543 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732550 | orchestrator | 2026-01-09 00:44:38.732556 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-09 00:44:38.732562 | orchestrator | Friday 09 January 2026 00:44:33 +0000 
(0:00:00.146) 0:01:08.133 ******** 2026-01-09 00:44:38.732569 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732575 | orchestrator | 2026-01-09 00:44:38.732581 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-09 00:44:38.732588 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.144) 0:01:08.277 ******** 2026-01-09 00:44:38.732594 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732600 | orchestrator | 2026-01-09 00:44:38.732659 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-09 00:44:38.732668 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.143) 0:01:08.421 ******** 2026-01-09 00:44:38.732674 | orchestrator | ok: [testbed-node-5] => { 2026-01-09 00:44:38.732682 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-09 00:44:38.732689 | orchestrator | } 2026-01-09 00:44:38.732696 | orchestrator | 2026-01-09 00:44:38.732702 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-09 00:44:38.732709 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.125) 0:01:08.546 ******** 2026-01-09 00:44:38.732715 | orchestrator | ok: [testbed-node-5] => { 2026-01-09 00:44:38.732721 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-09 00:44:38.732729 | orchestrator | } 2026-01-09 00:44:38.732735 | orchestrator | 2026-01-09 00:44:38.732742 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-09 00:44:38.732748 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.156) 0:01:08.703 ******** 2026-01-09 00:44:38.732755 | orchestrator | ok: [testbed-node-5] => { 2026-01-09 00:44:38.732762 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-09 00:44:38.732768 | orchestrator | } 2026-01-09 00:44:38.732774 | orchestrator | 2026-01-09 00:44:38.732781 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-09 00:44:38.732787 | orchestrator | Friday 09 January 2026 00:44:33 +0000 (0:00:00.149) 0:01:08.852 ******** 2026-01-09 00:44:38.732794 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:38.732800 | orchestrator | 2026-01-09 00:44:38.732806 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-09 00:44:38.732813 | orchestrator | Friday 09 January 2026 00:44:34 +0000 (0:00:00.501) 0:01:09.354 ******** 2026-01-09 00:44:38.732819 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:38.732826 | orchestrator | 2026-01-09 00:44:38.732832 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-09 00:44:38.732839 | orchestrator | Friday 09 January 2026 00:44:34 +0000 (0:00:00.510) 0:01:09.864 ******** 2026-01-09 00:44:38.732845 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:38.732850 | orchestrator | 2026-01-09 00:44:38.732854 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-09 00:44:38.732859 | orchestrator | Friday 09 January 2026 00:44:35 +0000 (0:00:00.699) 0:01:10.563 ******** 2026-01-09 00:44:38.732863 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:38.732868 | orchestrator | 2026-01-09 00:44:38.732873 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-09 00:44:38.732877 | orchestrator | Friday 09 January 2026 00:44:35 +0000 (0:00:00.161) 0:01:10.724 ******** 2026-01-09 00:44:38.732882 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732886 | orchestrator | 2026-01-09 00:44:38.732891 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-09 00:44:38.732900 | orchestrator | Friday 09 January 2026 00:44:35 +0000 (0:00:00.106) 0:01:10.831 ******** 2026-01-09 00:44:38.732904 | 
orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732909 | orchestrator | 2026-01-09 00:44:38.732913 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-09 00:44:38.732918 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.102) 0:01:10.933 ******** 2026-01-09 00:44:38.732922 | orchestrator | ok: [testbed-node-5] => { 2026-01-09 00:44:38.732927 | orchestrator |  "vgs_report": { 2026-01-09 00:44:38.732932 | orchestrator |  "vg": [] 2026-01-09 00:44:38.732950 | orchestrator |  } 2026-01-09 00:44:38.732955 | orchestrator | } 2026-01-09 00:44:38.732960 | orchestrator | 2026-01-09 00:44:38.732964 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-09 00:44:38.732969 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.159) 0:01:11.093 ******** 2026-01-09 00:44:38.732973 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732977 | orchestrator | 2026-01-09 00:44:38.732981 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-09 00:44:38.732985 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.142) 0:01:11.235 ******** 2026-01-09 00:44:38.732988 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.732992 | orchestrator | 2026-01-09 00:44:38.732996 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-09 00:44:38.733000 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.154) 0:01:11.389 ******** 2026-01-09 00:44:38.733003 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733007 | orchestrator | 2026-01-09 00:44:38.733011 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-09 00:44:38.733015 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.131) 0:01:11.521 ******** 2026-01-09 00:44:38.733018 | 
orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733022 | orchestrator | 2026-01-09 00:44:38.733026 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-09 00:44:38.733029 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.144) 0:01:11.666 ******** 2026-01-09 00:44:38.733033 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733037 | orchestrator | 2026-01-09 00:44:38.733040 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-09 00:44:38.733046 | orchestrator | Friday 09 January 2026 00:44:36 +0000 (0:00:00.133) 0:01:11.800 ******** 2026-01-09 00:44:38.733052 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733058 | orchestrator | 2026-01-09 00:44:38.733064 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-09 00:44:38.733070 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.136) 0:01:11.936 ******** 2026-01-09 00:44:38.733076 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733081 | orchestrator | 2026-01-09 00:44:38.733087 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-09 00:44:38.733093 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.126) 0:01:12.063 ******** 2026-01-09 00:44:38.733098 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733104 | orchestrator | 2026-01-09 00:44:38.733110 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-09 00:44:38.733116 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.270) 0:01:12.333 ******** 2026-01-09 00:44:38.733122 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733127 | orchestrator | 2026-01-09 00:44:38.733148 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-09 00:44:38.733155 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.155) 0:01:12.489 ******** 2026-01-09 00:44:38.733161 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733167 | orchestrator | 2026-01-09 00:44:38.733173 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-09 00:44:38.733186 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.134) 0:01:12.623 ******** 2026-01-09 00:44:38.733191 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733197 | orchestrator | 2026-01-09 00:44:38.733204 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-09 00:44:38.733211 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.137) 0:01:12.761 ******** 2026-01-09 00:44:38.733217 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733223 | orchestrator | 2026-01-09 00:44:38.733228 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-09 00:44:38.733234 | orchestrator | Friday 09 January 2026 00:44:37 +0000 (0:00:00.144) 0:01:12.906 ******** 2026-01-09 00:44:38.733241 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733247 | orchestrator | 2026-01-09 00:44:38.733253 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-09 00:44:38.733259 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.141) 0:01:13.047 ******** 2026-01-09 00:44:38.733264 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733270 | orchestrator | 2026-01-09 00:44:38.733275 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-09 00:44:38.733281 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.138) 0:01:13.186 ******** 2026-01-09 00:44:38.733287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:38.733294 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:38.733300 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733306 | orchestrator | 2026-01-09 00:44:38.733311 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-09 00:44:38.733318 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.144) 0:01:13.330 ******** 2026-01-09 00:44:38.733324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:38.733330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:38.733336 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:38.733342 | orchestrator | 2026-01-09 00:44:38.733348 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-09 00:44:38.733354 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.164) 0:01:13.495 ******** 2026-01-09 00:44:38.733369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892393 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892411 | orchestrator | 2026-01-09 00:44:41.892422 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-09 00:44:41.892432 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.154) 0:01:13.650 ******** 2026-01-09 00:44:41.892440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892457 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892465 | orchestrator | 2026-01-09 00:44:41.892474 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-09 00:44:41.892534 | orchestrator | Friday 09 January 2026 00:44:38 +0000 (0:00:00.156) 0:01:13.806 ******** 2026-01-09 00:44:41.892591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892608 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892653 | orchestrator | 2026-01-09 00:44:41.892662 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-09 00:44:41.892670 | orchestrator | Friday 09 January 2026 00:44:39 +0000 (0:00:00.167) 0:01:13.973 ******** 2026-01-09 00:44:41.892678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892695 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892703 | orchestrator | 2026-01-09 00:44:41.892711 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-09 00:44:41.892719 | orchestrator | Friday 09 January 2026 00:44:39 +0000 (0:00:00.390) 0:01:14.364 ******** 2026-01-09 00:44:41.892727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892746 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892756 | orchestrator | 2026-01-09 00:44:41.892766 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-09 00:44:41.892775 | orchestrator | Friday 09 January 2026 00:44:39 +0000 (0:00:00.195) 0:01:14.560 ******** 2026-01-09 00:44:41.892785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.892794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.892803 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.892812 | orchestrator | 2026-01-09 00:44:41.892821 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-09 00:44:41.892831 | orchestrator | Friday 09 January 2026 00:44:39 +0000 (0:00:00.166) 0:01:14.726 ******** 2026-01-09 00:44:41.892840 | 
orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:41.892851 | orchestrator | 2026-01-09 00:44:41.892860 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-09 00:44:41.892870 | orchestrator | Friday 09 January 2026 00:44:40 +0000 (0:00:00.516) 0:01:15.242 ******** 2026-01-09 00:44:41.892879 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:41.892888 | orchestrator | 2026-01-09 00:44:41.892897 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-09 00:44:41.892906 | orchestrator | Friday 09 January 2026 00:44:40 +0000 (0:00:00.564) 0:01:15.807 ******** 2026-01-09 00:44:41.892915 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:44:41.892924 | orchestrator | 2026-01-09 00:44:41.892933 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-09 00:44:41.892942 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.142) 0:01:15.950 ******** 2026-01-09 00:44:41.892952 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'vg_name': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'}) 2026-01-09 00:44:41.892963 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'vg_name': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'}) 2026-01-09 00:44:41.892980 | orchestrator | 2026-01-09 00:44:41.892989 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-09 00:44:41.892999 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.186) 0:01:16.136 ******** 2026-01-09 00:44:41.893042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.893052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.893062 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.893072 | orchestrator | 2026-01-09 00:44:41.893081 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-09 00:44:41.893091 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.165) 0:01:16.302 ******** 2026-01-09 00:44:41.893099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.893107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.893115 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.893123 | orchestrator | 2026-01-09 00:44:41.893131 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-09 00:44:41.893139 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.159) 0:01:16.461 ******** 2026-01-09 00:44:41.893147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})  2026-01-09 00:44:41.893155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})  2026-01-09 00:44:41.893163 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:44:41.893171 | orchestrator | 2026-01-09 00:44:41.893179 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-09 00:44:41.893186 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.165) 0:01:16.626 ******** 2026-01-09 00:44:41.893194 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-09 00:44:41.893202 | orchestrator |  "lvm_report": { 2026-01-09 00:44:41.893210 | orchestrator |  "lv": [ 2026-01-09 00:44:41.893219 | orchestrator |  { 2026-01-09 00:44:41.893231 | orchestrator |  "lv_name": "osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b", 2026-01-09 00:44:41.893239 | orchestrator |  "vg_name": "ceph-11533966-1bdf-5daf-a468-949db0b9bc1b" 2026-01-09 00:44:41.893247 | orchestrator |  }, 2026-01-09 00:44:41.893255 | orchestrator |  { 2026-01-09 00:44:41.893263 | orchestrator |  "lv_name": "osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f", 2026-01-09 00:44:41.893271 | orchestrator |  "vg_name": "ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f" 2026-01-09 00:44:41.893279 | orchestrator |  } 2026-01-09 00:44:41.893287 | orchestrator |  ], 2026-01-09 00:44:41.893295 | orchestrator |  "pv": [ 2026-01-09 00:44:41.893303 | orchestrator |  { 2026-01-09 00:44:41.893311 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-09 00:44:41.893319 | orchestrator |  "vg_name": "ceph-11533966-1bdf-5daf-a468-949db0b9bc1b" 2026-01-09 00:44:41.893327 | orchestrator |  }, 2026-01-09 00:44:41.893335 | orchestrator |  { 2026-01-09 00:44:41.893343 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-09 00:44:41.893351 | orchestrator |  "vg_name": "ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f" 2026-01-09 00:44:41.893359 | orchestrator |  } 2026-01-09 00:44:41.893367 | orchestrator |  ] 2026-01-09 00:44:41.893381 | orchestrator |  } 2026-01-09 00:44:41.893389 | orchestrator | } 2026-01-09 00:44:41.893397 | orchestrator | 2026-01-09 00:44:41.893405 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:44:41.893413 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-09 00:44:41.893422 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-09 00:44:41.893430 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-09 00:44:41.893438 | orchestrator | 2026-01-09 00:44:41.893446 | orchestrator | 2026-01-09 00:44:41.893453 | orchestrator | 2026-01-09 00:44:41.893461 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:44:41.893469 | orchestrator | Friday 09 January 2026 00:44:41 +0000 (0:00:00.157) 0:01:16.784 ******** 2026-01-09 00:44:41.893477 | orchestrator | =============================================================================== 2026-01-09 00:44:41.893485 | orchestrator | Create block VGs -------------------------------------------------------- 5.99s 2026-01-09 00:44:41.893493 | orchestrator | Create block LVs -------------------------------------------------------- 4.24s 2026-01-09 00:44:41.893501 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.86s 2026-01-09 00:44:41.893509 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.80s 2026-01-09 00:44:41.893517 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s 2026-01-09 00:44:41.893525 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s 2026-01-09 00:44:41.893533 | orchestrator | Add known partitions to the list of available block devices ------------- 1.63s 2026-01-09 00:44:41.893541 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-01-09 00:44:41.893553 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-01-09 00:44:42.334717 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s 2026-01-09 00:44:42.334803 | orchestrator | Print LVM report data --------------------------------------------------- 0.98s 2026-01-09 00:44:42.334810 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-01-09 00:44:42.334815 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-01-09 00:44:42.334820 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-01-09 00:44:42.334825 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.79s 2026-01-09 00:44:42.334830 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.76s 2026-01-09 00:44:42.334834 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-01-09 00:44:42.334839 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.74s 2026-01-09 00:44:42.334843 | orchestrator | Prepare variables for OSD count check ----------------------------------- 0.73s 2026-01-09 00:44:42.334848 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.72s 2026-01-09 00:44:54.940080 | orchestrator | 2026-01-09 00:44:54 | INFO  | Task 9c340ef6-040f-4a1c-a7d8-1af973cc75fe (facts) was prepared for execution. 2026-01-09 00:44:54.940211 | orchestrator | 2026-01-09 00:44:54 | INFO  | It takes a moment until task 9c340ef6-040f-4a1c-a7d8-1af973cc75fe (facts) has been started and output is visible here. 
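The play above gathers Ceph LV and PV lists as JSON ("Get list of Ceph LVs/PVs with associated VGs") and combines them into the `lvm_report` structure printed later. A minimal Python sketch of that combine step, assuming the usual `--reportformat json` shape of `lvs`/`pvs` output; `combine_reports` and the sample strings below are illustrative, not the playbook's actual code:

```python
import json

# Sample data shaped like `lvs --reportformat json -o lv_name,vg_name` output
# (field selection is an assumption); values taken from the report in this log.
lvs_sample = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b",
     "vg_name": "ceph-11533966-1bdf-5daf-a468-949db0b9bc1b"},
    {"lv_name": "osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f",
     "vg_name": "ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"},
]}]})

# Sample data shaped like `pvs --reportformat json -o pv_name,vg_name` output.
pvs_sample = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-11533966-1bdf-5daf-a468-949db0b9bc1b"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f"},
]}]})

def combine_reports(lvs_output: str, pvs_output: str) -> dict:
    """Merge lvs/pvs JSON into one {'lv': [...], 'pv': [...]} report dict."""
    lvs = json.loads(lvs_output)
    pvs = json.loads(pvs_output)
    return {
        "lv": lvs["report"][0]["lv"],
        "pv": pvs["report"][0]["pv"],
    }

lvm_report = combine_reports(lvs_sample, pvs_sample)
```

With the sample data above, `lvm_report` matches the structure the "Print LVM report data" task emits: each `lv` entry pairs an `osd-block-*` LV with its `ceph-*` VG, and each `pv` entry pairs a device (`/dev/sdb`, `/dev/sdc`) with the same VG.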
2026-01-09 00:45:07.704243 | orchestrator | 2026-01-09 00:45:07.704383 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-09 00:45:07.704413 | orchestrator | 2026-01-09 00:45:07.704434 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-09 00:45:07.704452 | orchestrator | Friday 09 January 2026 00:44:59 +0000 (0:00:00.262) 0:00:00.262 ******** 2026-01-09 00:45:07.704511 | orchestrator | ok: [testbed-manager] 2026-01-09 00:45:07.704535 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:45:07.704556 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:45:07.704574 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:45:07.704595 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:45:07.704674 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:45:07.704692 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:45:07.704712 | orchestrator | 2026-01-09 00:45:07.704749 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-09 00:45:07.704771 | orchestrator | Friday 09 January 2026 00:45:00 +0000 (0:00:01.114) 0:00:01.376 ******** 2026-01-09 00:45:07.704789 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:45:07.704809 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:45:07.704827 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:45:07.704845 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:45:07.704863 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:45:07.704881 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:45:07.704899 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:45:07.704918 | orchestrator | 2026-01-09 00:45:07.704936 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-09 00:45:07.704955 | orchestrator | 2026-01-09 00:45:07.704973 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-09 00:45:07.704991 | orchestrator | Friday 09 January 2026 00:45:01 +0000 (0:00:01.347) 0:00:02.724 ******** 2026-01-09 00:45:07.705010 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:45:07.705028 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:45:07.705046 | orchestrator | ok: [testbed-manager] 2026-01-09 00:45:07.705064 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:45:07.705083 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:45:07.705101 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:45:07.705119 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:45:07.705137 | orchestrator | 2026-01-09 00:45:07.705156 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-09 00:45:07.705173 | orchestrator | 2026-01-09 00:45:07.705192 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-09 00:45:07.705210 | orchestrator | Friday 09 January 2026 00:45:06 +0000 (0:00:04.846) 0:00:07.571 ******** 2026-01-09 00:45:07.705227 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:45:07.705247 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:45:07.705265 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:45:07.705285 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:45:07.705305 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:45:07.705324 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:45:07.705341 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:45:07.705359 | orchestrator | 2026-01-09 00:45:07.705376 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:45:07.705395 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705415 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-09 00:45:07.705434 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705453 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705472 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705490 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705531 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:07.705552 | orchestrator | 2026-01-09 00:45:07.705572 | orchestrator | 2026-01-09 00:45:07.705593 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:45:07.705679 | orchestrator | Friday 09 January 2026 00:45:07 +0000 (0:00:00.563) 0:00:08.134 ******** 2026-01-09 00:45:07.705700 | orchestrator | =============================================================================== 2026-01-09 00:45:07.705719 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2026-01-09 00:45:07.705738 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s 2026-01-09 00:45:07.705757 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2026-01-09 00:45:07.705776 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-01-09 00:45:20.270456 | orchestrator | 2026-01-09 00:45:20 | INFO  | Task aba0fd6c-fe2a-4592-8eee-b46a8df6a047 (frr) was prepared for execution. 2026-01-09 00:45:20.270578 | orchestrator | 2026-01-09 00:45:20 | INFO  | It takes a moment until task aba0fd6c-fe2a-4592-8eee-b46a8df6a047 (frr) has been started and output is visible here. 
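The frr play that runs next applies a list of sysctl parameters on the manager node. As a rough dry-run sketch of how such dotted sysctl keys map onto `/proc/sys` paths — `sysctl_proc_path` and `render_plan` are hypothetical helpers, not the `osism.services.frr` role's implementation — using the exact items the role sets in this log:

```python
# The sysctl items applied by the frr play in this log.
FRR_SYSCTL_ITEMS = [
    {"name": "net.ipv4.ip_forward", "value": 1},
    {"name": "net.ipv4.conf.all.send_redirects", "value": 0},
    {"name": "net.ipv4.conf.all.accept_redirects", "value": 0},
    {"name": "net.ipv4.fib_multipath_hash_policy", "value": 1},
    {"name": "net.ipv4.conf.default.ignore_routes_with_linkdown", "value": 1},
    {"name": "net.ipv4.conf.all.rp_filter", "value": 2},
]

def sysctl_proc_path(name: str) -> str:
    """Map a dotted sysctl key to its /proc/sys path,
    e.g. 'net.ipv4.ip_forward' -> '/proc/sys/net/ipv4/ip_forward'."""
    return "/proc/sys/" + name.replace(".", "/")

def render_plan(items: list[dict]) -> dict[str, str]:
    """Dry-run view of what would be written, without touching /proc."""
    return {sysctl_proc_path(i["name"]): str(i["value"]) for i in items}
```

Calling `render_plan(FRR_SYSCTL_ITEMS)` yields one `/proc/sys/...` path per item; the role itself persists these via the usual sysctl mechanism rather than writing `/proc` directly, so this is only a mapping illustration.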
2026-01-09 00:45:47.369260 | orchestrator | 2026-01-09 00:45:47.369417 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-09 00:45:47.369436 | orchestrator | 2026-01-09 00:45:47.369449 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-09 00:45:47.369461 | orchestrator | Friday 09 January 2026 00:45:24 +0000 (0:00:00.236) 0:00:00.236 ******** 2026-01-09 00:45:47.369473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-09 00:45:47.369487 | orchestrator | 2026-01-09 00:45:47.369498 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-09 00:45:47.369510 | orchestrator | Friday 09 January 2026 00:45:25 +0000 (0:00:00.219) 0:00:00.455 ******** 2026-01-09 00:45:47.369521 | orchestrator | changed: [testbed-manager] 2026-01-09 00:45:47.369542 | orchestrator | 2026-01-09 00:45:47.369561 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-09 00:45:47.369632 | orchestrator | Friday 09 January 2026 00:45:26 +0000 (0:00:01.284) 0:00:01.740 ******** 2026-01-09 00:45:47.369654 | orchestrator | changed: [testbed-manager] 2026-01-09 00:45:47.369672 | orchestrator | 2026-01-09 00:45:47.369691 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-09 00:45:47.369710 | orchestrator | Friday 09 January 2026 00:45:36 +0000 (0:00:10.526) 0:00:12.267 ******** 2026-01-09 00:45:47.369729 | orchestrator | ok: [testbed-manager] 2026-01-09 00:45:47.369742 | orchestrator | 2026-01-09 00:45:47.369755 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-09 00:45:47.369768 | orchestrator | Friday 09 January 2026 00:45:37 +0000 (0:00:01.099) 0:00:13.366 ******** 2026-01-09 
00:45:47.369781 | orchestrator | changed: [testbed-manager] 2026-01-09 00:45:47.369794 | orchestrator | 2026-01-09 00:45:47.369806 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-09 00:45:47.369819 | orchestrator | Friday 09 January 2026 00:45:38 +0000 (0:00:01.005) 0:00:14.371 ******** 2026-01-09 00:45:47.369833 | orchestrator | ok: [testbed-manager] 2026-01-09 00:45:47.369845 | orchestrator | 2026-01-09 00:45:47.369858 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-09 00:45:47.369871 | orchestrator | Friday 09 January 2026 00:45:40 +0000 (0:00:01.222) 0:00:15.594 ******** 2026-01-09 00:45:47.369890 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:45:47.369916 | orchestrator | 2026-01-09 00:45:47.369938 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-09 00:45:47.369957 | orchestrator | Friday 09 January 2026 00:45:40 +0000 (0:00:00.162) 0:00:15.757 ******** 2026-01-09 00:45:47.370007 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:45:47.370115 | orchestrator | 2026-01-09 00:45:47.370130 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-09 00:45:47.370143 | orchestrator | Friday 09 January 2026 00:45:40 +0000 (0:00:00.169) 0:00:15.926 ******** 2026-01-09 00:45:47.370154 | orchestrator | changed: [testbed-manager] 2026-01-09 00:45:47.370165 | orchestrator | 2026-01-09 00:45:47.370176 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-09 00:45:47.370187 | orchestrator | Friday 09 January 2026 00:45:41 +0000 (0:00:01.042) 0:00:16.969 ******** 2026-01-09 00:45:47.370197 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-09 00:45:47.370209 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-09 00:45:47.370221 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-09 00:45:47.370232 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-09 00:45:47.370243 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-09 00:45:47.370254 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-09 00:45:47.370265 | orchestrator | 2026-01-09 00:45:47.370276 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-09 00:45:47.370287 | orchestrator | Friday 09 January 2026 00:45:43 +0000 (0:00:02.337) 0:00:19.307 ******** 2026-01-09 00:45:47.370298 | orchestrator | ok: [testbed-manager] 2026-01-09 00:45:47.370309 | orchestrator | 2026-01-09 00:45:47.370320 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-09 00:45:47.370330 | orchestrator | Friday 09 January 2026 00:45:45 +0000 (0:00:01.737) 0:00:21.044 ******** 2026-01-09 00:45:47.370341 | orchestrator | changed: [testbed-manager] 2026-01-09 00:45:47.370352 | orchestrator | 2026-01-09 00:45:47.370363 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:45:47.370375 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:45:47.370386 | orchestrator | 2026-01-09 00:45:47.370396 | orchestrator | 2026-01-09 00:45:47.370407 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:45:47.370418 | orchestrator | Friday 09 January 2026 00:45:47 +0000 (0:00:01.461) 0:00:22.506 ******** 2026-01-09 00:45:47.370429 | 
orchestrator | =============================================================================== 2026-01-09 00:45:47.370440 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.53s 2026-01-09 00:45:47.370451 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.34s 2026-01-09 00:45:47.370461 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.74s 2026-01-09 00:45:47.370472 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.46s 2026-01-09 00:45:47.370483 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.28s 2026-01-09 00:45:47.370516 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.22s 2026-01-09 00:45:47.370528 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s 2026-01-09 00:45:47.370539 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.04s 2026-01-09 00:45:47.370550 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.01s 2026-01-09 00:45:47.370561 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-01-09 00:45:47.370638 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-01-09 00:45:47.370652 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-01-09 00:45:47.678909 | orchestrator | 2026-01-09 00:45:47.680590 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Jan 9 00:45:47 UTC 2026 2026-01-09 00:45:47.680622 | orchestrator | 2026-01-09 00:45:49.692878 | orchestrator | 2026-01-09 00:45:49 | INFO  | Collection nutshell is prepared for execution 2026-01-09 00:45:49.693011 | orchestrator | 2026-01-09 00:45:49 | INFO  | A [0] - 
dotfiles 2026-01-09 00:45:59.721877 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - homer 2026-01-09 00:45:59.721984 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - netdata 2026-01-09 00:45:59.721997 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - openstackclient 2026-01-09 00:45:59.722008 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - phpmyadmin 2026-01-09 00:45:59.722063 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - common 2026-01-09 00:45:59.725848 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- loadbalancer 2026-01-09 00:45:59.725997 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [2] --- opensearch 2026-01-09 00:45:59.726069 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [2] --- mariadb-ng 2026-01-09 00:45:59.726332 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [3] ---- horizon 2026-01-09 00:45:59.726663 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [3] ---- keystone 2026-01-09 00:45:59.726981 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- neutron 2026-01-09 00:45:59.727258 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ wait-for-nova 2026-01-09 00:45:59.727413 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [6] ------- octavia 2026-01-09 00:45:59.729785 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- barbican 2026-01-09 00:45:59.730227 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- designate 2026-01-09 00:45:59.730298 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- ironic 2026-01-09 00:45:59.730305 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- placement 2026-01-09 00:45:59.730310 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- magnum 2026-01-09 00:45:59.731506 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- openvswitch 2026-01-09 00:45:59.731600 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [2] --- ovn 2026-01-09 00:45:59.731876 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- memcached 2026-01-09 
00:45:59.731906 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- redis 2026-01-09 00:45:59.732035 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- rabbitmq-ng 2026-01-09 00:45:59.732264 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - kubernetes 2026-01-09 00:45:59.734704 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- kubeconfig 2026-01-09 00:45:59.734816 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- copy-kubeconfig 2026-01-09 00:45:59.735015 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [0] - ceph 2026-01-09 00:45:59.737286 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [1] -- ceph-pools 2026-01-09 00:45:59.737325 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [2] --- copy-ceph-keys 2026-01-09 00:45:59.737337 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [3] ---- cephclient 2026-01-09 00:45:59.737367 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-09 00:45:59.737375 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- wait-for-keystone 2026-01-09 00:45:59.737643 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-09 00:45:59.737663 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ glance 2026-01-09 00:45:59.737769 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ cinder 2026-01-09 00:45:59.737846 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ nova 2026-01-09 00:45:59.738181 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [4] ----- prometheus 2026-01-09 00:45:59.738199 | orchestrator | 2026-01-09 00:45:59 | INFO  | A [5] ------ grafana 2026-01-09 00:45:59.932016 | orchestrator | 2026-01-09 00:45:59 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-09 00:45:59.932092 | orchestrator | 2026-01-09 00:45:59 | INFO  | Tasks are running in the background 2026-01-09 00:46:03.574883 | orchestrator | 2026-01-09 00:46:03 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-09 00:46:05.699251 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:05.699352 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:05.701154 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:05.701650 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:05.702191 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:05.706206 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:05.709434 | orchestrator | 2026-01-09 00:46:05 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:05.709490 | orchestrator | 2026-01-09 00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:08.794055 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:08.794141 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:08.794148 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:08.794152 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:08.794157 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:08.794176 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:08.794180 | orchestrator | 2026-01-09 00:46:08 | INFO  | Task 
20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:08.794185 | orchestrator | 2026-01-09 00:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:11.798777 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:11.801218 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:11.801725 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:11.802303 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:11.803374 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:11.807009 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:11.807697 | orchestrator | 2026-01-09 00:46:11 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:11.807817 | orchestrator | 2026-01-09 00:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:14.867720 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:14.868396 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:14.874385 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:14.877481 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:14.877875 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:14.880272 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task 
7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:14.880820 | orchestrator | 2026-01-09 00:46:14 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:14.880848 | orchestrator | 2026-01-09 00:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:17.950406 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:17.950525 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:17.950542 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:17.950633 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:17.952622 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:17.955955 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:17.958758 | orchestrator | 2026-01-09 00:46:17 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:17.959624 | orchestrator | 2026-01-09 00:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:21.189910 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:21.190149 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:21.190170 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:21.190185 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:21.190200 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task 
8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:21.191773 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:21.191821 | orchestrator | 2026-01-09 00:46:21 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:21.191838 | orchestrator | 2026-01-09 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:24.344188 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:24.344293 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:24.344302 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:24.344332 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED 2026-01-09 00:46:24.344339 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:46:24.344345 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:46:24.344351 | orchestrator | 2026-01-09 00:46:24 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:46:24.344358 | orchestrator | 2026-01-09 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:46:27.577098 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED 2026-01-09 00:46:27.577241 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED 2026-01-09 00:46:27.577262 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED 2026-01-09 00:46:27.577275 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task 
a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state STARTED
2026-01-09 00:46:27.577309 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:27.577353 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:27.577372 | orchestrator | 2026-01-09 00:46:27 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:27.577390 | orchestrator | 2026-01-09 00:46:27 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:46:30.597040 | orchestrator |
2026-01-09 00:46:30.597129 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-09 00:46:30.597137 | orchestrator |
2026-01-09 00:46:30.597146 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-09 00:46:30.597155 | orchestrator | Friday 09 January 2026 00:46:15 +0000 (0:00:00.842) 0:00:00.842 ********
2026-01-09 00:46:30.597163 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:46:30.597171 | orchestrator | changed: [testbed-manager]
2026-01-09 00:46:30.597178 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:46:30.597186 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:46:30.597194 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:46:30.597203 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:46:30.597211 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:46:30.597220 | orchestrator |
2026-01-09 00:46:30.597227 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-09 00:46:30.597233 | orchestrator | Friday 09 January 2026 00:46:19 +0000 (0:00:03.829) 0:00:04.672 ********
2026-01-09 00:46:30.597239 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-09 00:46:30.597245 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-09 00:46:30.597250 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-09 00:46:30.597255 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-09 00:46:30.597261 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-09 00:46:30.597266 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-09 00:46:30.597271 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-09 00:46:30.597276 | orchestrator |
2026-01-09 00:46:30.597281 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-09 00:46:30.597287 | orchestrator | Friday 09 January 2026 00:46:20 +0000 (0:00:01.275) 0:00:05.947 ********
2026-01-09 00:46:30.597303 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.668710', 'end': '2026-01-09 00:46:20.675288', 'delta': '0:00:00.006578', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597330 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.500542', 'end': '2026-01-09 00:46:20.507124', 'delta': '0:00:00.006582', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597336 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.524999', 'end': '2026-01-09 00:46:20.533329', 'delta': '0:00:00.008330', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597361 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.535955', 'end': '2026-01-09 00:46:20.541904', 'delta': '0:00:00.005949', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597367 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.514109', 'end': '2026-01-09 00:46:20.520477', 'delta': '0:00:00.006368', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597384 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.697234', 'end': '2026-01-09 00:46:20.707777', 'delta': '0:00:00.010543', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597390 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-09 00:46:20.526497', 'end': '2026-01-09 00:46:20.534090', 'delta': '0:00:00.007593', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-09 00:46:30.597395 | orchestrator |
2026-01-09 00:46:30.597400 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-09 00:46:30.597405 | orchestrator | Friday 09 January 2026 00:46:23 +0000 (0:00:02.581) 0:00:08.528 ********
2026-01-09 00:46:30.597410 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-09 00:46:30.597414 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-09 00:46:30.597419 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-09 00:46:30.597424 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-09 00:46:30.597429 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-09 00:46:30.597434 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-09 00:46:30.597438 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-09 00:46:30.597443 | orchestrator |
2026-01-09 00:46:30.597448 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-09 00:46:30.597453 | orchestrator | Friday 09 January 2026 00:46:26 +0000 (0:00:02.584) 0:00:11.113 ********
2026-01-09 00:46:30.597458 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-09 00:46:30.597463 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-09 00:46:30.597467 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-09 00:46:30.597472 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-09 00:46:30.597477 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-09 00:46:30.597481 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-09 00:46:30.597486 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-09 00:46:30.597491 | orchestrator |
2026-01-09 00:46:30.597496 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:46:30.597505 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597513 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597518 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597528 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597557 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597563 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597569 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:46:30.597575 | orchestrator |
2026-01-09 00:46:30.597580 | orchestrator |
2026-01-09 00:46:30.597586 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:46:30.597592 | orchestrator | Friday 09 January 2026 00:46:28 +0000 (0:00:02.736) 0:00:13.850 ********
2026-01-09 00:46:30.597597 | orchestrator | ===============================================================================
2026-01-09 00:46:30.597603 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.83s
2026-01-09 00:46:30.597613 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.74s
2026-01-09 00:46:30.597619 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.58s
2026-01-09 00:46:30.597625 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.58s
2026-01-09 00:46:30.597631 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.28s
2026-01-09 00:46:30.597637 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:46:30.597654 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED
2026-01-09 00:46:30.597660 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED
2026-01-09 00:46:30.597671 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task a2826b46-8aba-4974-8cd5-324ae4c0d64a is in state SUCCESS
2026-01-09 00:46:30.597677 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:30.597683 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:46:30.597688 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:30.597693 | orchestrator | 2026-01-09 00:46:30 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:30.597699 | orchestrator | 2026-01-09 00:46:30 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:46:33.657090 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:46:33.660091 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED
2026-01-09 00:46:33.661448 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state STARTED
2026-01-09 00:46:33.663941 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:33.666303 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:46:33.668484 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task 
7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:33.669638 | orchestrator | 2026-01-09 00:46:33 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:33.671071 | orchestrator | 2026-01-09 00:46:33 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:46:52.958259 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:46:52.958352 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED
2026-01-09 00:46:52.958367 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task 
b5822fee-f899-42be-8ca9-90ddb9a66ee2 is in state SUCCESS
2026-01-09 00:46:52.958379 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:52.958390 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:46:52.958402 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:52.958413 | orchestrator | 2026-01-09 00:46:52 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:52.958424 | orchestrator | 2026-01-09 00:46:52 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:46:55.958300 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:46:55.958400 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED
2026-01-09 00:46:55.958424 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:55.958445 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:46:55.958463 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:55.958483 | orchestrator | 2026-01-09 00:46:55 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:55.958501 | orchestrator | 2026-01-09 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:46:59.002415 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:46:59.003075 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state STARTED
2026-01-09 00:46:59.007340 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:46:59.008295 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:46:59.009364 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:46:59.014161 | orchestrator | 2026-01-09 00:46:59 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:46:59.014182 | orchestrator | 2026-01-09 00:46:59 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:02.053888 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:47:02.054198 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task cc3a6130-7f5a-410a-abc8-4cb4354c8079 is in state SUCCESS
2026-01-09 00:47:02.055793 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:02.057834 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:47:02.061897 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:02.070072 | orchestrator | 2026-01-09 00:47:02 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:02.070179 | orchestrator | 2026-01-09 00:47:02 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:05.114665 | orchestrator | 2026-01-09 00:47:05 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:47:05.115847 | orchestrator | 2026-01-09 00:47:05 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:05.119998 | orchestrator | 2026-01-09 00:47:05 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:47:05.123370 | orchestrator | 2026-01-09 00:47:05 | INFO  | Task 
7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:05.125537 | orchestrator | 2026-01-09 00:47:05 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:05.125860 | orchestrator | 2026-01-09 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:35.780861 | orchestrator | 2026-01-09 00:47:35 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state STARTED
2026-01-09 00:47:35.782453 | orchestrator | 2026-01-09 00:47:35 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:35.783786 | orchestrator | 2026-01-09 00:47:35 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state STARTED
2026-01-09 00:47:35.785740 | orchestrator | 2026-01-09 00:47:35 | INFO  | Task 
7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:47:35.787144 | orchestrator | 2026-01-09 00:47:35 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED 2026-01-09 00:47:35.787180 | orchestrator | 2026-01-09 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:47:38.854995 | orchestrator | 2026-01-09 00:47:38.855086 | orchestrator | 2026-01-09 00:47:38.855095 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-09 00:47:38.855101 | orchestrator | 2026-01-09 00:47:38.855106 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-09 00:47:38.855111 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:01.013) 0:00:01.013 ******** 2026-01-09 00:47:38.855115 | orchestrator | ok: [testbed-manager] => { 2026-01-09 00:47:38.855122 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-09 00:47:38.855127 | orchestrator | } 2026-01-09 00:47:38.855131 | orchestrator | 2026-01-09 00:47:38.855135 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-09 00:47:38.855153 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.509) 0:00:01.523 ******** 2026-01-09 00:47:38.855158 | orchestrator | ok: [testbed-manager] 2026-01-09 00:47:38.855163 | orchestrator | 2026-01-09 00:47:38.855167 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-09 00:47:38.855171 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:02.312) 0:00:03.835 ******** 2026-01-09 00:47:38.855175 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-09 00:47:38.855179 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-09 00:47:38.855183 | orchestrator | 2026-01-09 00:47:38.855188 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-09 00:47:38.855196 | orchestrator | Friday 09 January 2026 00:46:18 +0000 (0:00:01.099) 0:00:04.934 ******** 2026-01-09 00:47:38.855200 | orchestrator | changed: [testbed-manager] 2026-01-09 00:47:38.855204 | orchestrator | 2026-01-09 00:47:38.855207 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-09 00:47:38.855211 | orchestrator | Friday 09 January 2026 00:46:20 +0000 (0:00:02.760) 0:00:07.695 ******** 2026-01-09 00:47:38.855215 | orchestrator | changed: [testbed-manager] 2026-01-09 00:47:38.855219 | orchestrator | 2026-01-09 00:47:38.855222 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-01-09 00:47:38.855226 | orchestrator | Friday 09 January 2026 00:46:22 +0000 (0:00:02.026) 0:00:09.722 ******** 2026-01-09 00:47:38.855230 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-01-09 00:47:38.855234 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855238 | orchestrator |
2026-01-09 00:47:38.855242 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-09 00:47:38.855246 | orchestrator | Friday 09 January 2026 00:46:48 +0000 (0:00:25.852) 0:00:35.574 ********
2026-01-09 00:47:38.855249 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855253 | orchestrator |
2026-01-09 00:47:38.855257 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:47:38.855261 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.855268 | orchestrator |
2026-01-09 00:47:38.855271 | orchestrator |
2026-01-09 00:47:38.855275 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:47:38.855279 | orchestrator | Friday 09 January 2026 00:46:50 +0000 (0:00:02.105) 0:00:37.680 ********
2026-01-09 00:47:38.855283 | orchestrator | ===============================================================================
2026-01-09 00:47:38.855318 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.85s
2026-01-09 00:47:38.855322 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.76s
2026-01-09 00:47:38.855327 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.32s
2026-01-09 00:47:38.855330 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.11s
2026-01-09 00:47:38.855334 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.03s
2026-01-09 00:47:38.855338 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.10s
2026-01-09 00:47:38.855342 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.51s
2026-01-09 00:47:38.855345 | orchestrator |
2026-01-09 00:47:38.855349 | orchestrator |
2026-01-09 00:47:38.855353 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-09 00:47:38.855356 | orchestrator |
2026-01-09 00:47:38.855360 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-09 00:47:38.855364 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.337) 0:00:00.337 ********
2026-01-09 00:47:38.855368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-09 00:47:38.855377 | orchestrator |
2026-01-09 00:47:38.855381 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-09 00:47:38.855385 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.545) 0:00:00.882 ********
2026-01-09 00:47:38.855389 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-09 00:47:38.855392 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-09 00:47:38.855396 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-09 00:47:38.855400 | orchestrator |
2026-01-09 00:47:38.855404 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-09 00:47:38.855407 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:01.802) 0:00:02.685 ********
2026-01-09 00:47:38.855411 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855415 | orchestrator |
2026-01-09 00:47:38.855419 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-09 00:47:38.855422 | orchestrator | Friday 09 January 2026 00:46:18 +0000 (0:00:02.116) 0:00:04.801 ********
2026-01-09 00:47:38.855437 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-09 00:47:38.855441 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855445 | orchestrator |
2026-01-09 00:47:38.855448 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-09 00:47:38.855452 | orchestrator | Friday 09 January 2026 00:46:52 +0000 (0:00:33.978) 0:00:38.780 ********
2026-01-09 00:47:38.855456 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855475 | orchestrator |
2026-01-09 00:47:38.855479 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-09 00:47:38.855483 | orchestrator | Friday 09 January 2026 00:46:55 +0000 (0:00:02.592) 0:00:41.372 ********
2026-01-09 00:47:38.855487 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855490 | orchestrator |
2026-01-09 00:47:38.855494 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-09 00:47:38.855498 | orchestrator | Friday 09 January 2026 00:46:56 +0000 (0:00:01.320) 0:00:42.692 ********
2026-01-09 00:47:38.855502 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855506 | orchestrator |
2026-01-09 00:47:38.855509 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-09 00:47:38.855513 | orchestrator | Friday 09 January 2026 00:46:58 +0000 (0:00:01.597) 0:00:44.290 ********
2026-01-09 00:47:38.855517 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855520 | orchestrator |
2026-01-09 00:47:38.855524 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-09 00:47:38.855531 | orchestrator | Friday 09 January 2026 00:46:59 +0000 (0:00:01.397) 0:00:45.687 ********
2026-01-09 00:47:38.855535 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855538 | orchestrator |
2026-01-09 00:47:38.855542 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-09 00:47:38.855546 | orchestrator | Friday 09 January 2026 00:47:00 +0000 (0:00:00.681) 0:00:46.369 ********
2026-01-09 00:47:38.855550 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855553 | orchestrator |
2026-01-09 00:47:38.855558 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:47:38.855563 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.855567 | orchestrator |
2026-01-09 00:47:38.855572 | orchestrator |
2026-01-09 00:47:38.855576 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:47:38.855580 | orchestrator | Friday 09 January 2026 00:47:00 +0000 (0:00:00.458) 0:00:46.827 ********
2026-01-09 00:47:38.855585 | orchestrator | ===============================================================================
2026-01-09 00:47:38.855589 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.98s
2026-01-09 00:47:38.855597 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.59s
2026-01-09 00:47:38.855602 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.12s
2026-01-09 00:47:38.855606 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.80s
2026-01-09 00:47:38.855610 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.60s
2026-01-09 00:47:38.855614 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.40s
2026-01-09 00:47:38.855619 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.32s
2026-01-09 00:47:38.855623 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.68s
2026-01-09 00:47:38.855627 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.55s
2026-01-09 00:47:38.855631 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s
2026-01-09 00:47:38.855636 | orchestrator |
2026-01-09 00:47:38.855640 | orchestrator |
2026-01-09 00:47:38.855644 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 00:47:38.855649 | orchestrator |
2026-01-09 00:47:38.855653 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 00:47:38.855657 | orchestrator | Friday 09 January 2026 00:46:12 +0000 (0:00:00.397) 0:00:00.397 ********
2026-01-09 00:47:38.855661 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-09 00:47:38.855666 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-09 00:47:38.855670 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-09 00:47:38.855674 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-09 00:47:38.855679 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-09 00:47:38.855683 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-09 00:47:38.855687 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-09 00:47:38.855691 | orchestrator |
2026-01-09 00:47:38.855696 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-09 00:47:38.855700 | orchestrator |
2026-01-09 00:47:38.855704 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-09 00:47:38.855708 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:02.508) 0:00:02.906 ********
2026-01-09 00:47:38.855721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:47:38.855730 | orchestrator |
2026-01-09 00:47:38.855734 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-09 00:47:38.855739 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:01.701) 0:00:04.607 ********
2026-01-09 00:47:38.855743 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:47:38.855747 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855752 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:47:38.855756 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:47:38.855760 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:47:38.855767 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:47:38.855771 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:47:38.855776 | orchestrator |
2026-01-09 00:47:38.855780 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-09 00:47:38.855784 | orchestrator | Friday 09 January 2026 00:46:18 +0000 (0:00:01.934) 0:00:06.542 ********
2026-01-09 00:47:38.855789 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:47:38.855793 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:47:38.855797 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:47:38.855802 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:47:38.855806 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:47:38.855811 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:47:38.855815 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.855822 | orchestrator |
2026-01-09 00:47:38.855827 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-09 00:47:38.855831 | orchestrator | Friday 09 January 2026 00:46:21 +0000 (0:00:03.440) 0:00:09.982 ********
2026-01-09 00:47:38.855836 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:47:38.855841 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:47:38.855845 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855850 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:47:38.855854 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:47:38.855858 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:47:38.855863 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:47:38.855867 | orchestrator |
2026-01-09 00:47:38.855871 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-09 00:47:38.855875 | orchestrator | Friday 09 January 2026 00:46:24 +0000 (0:00:02.530) 0:00:12.513 ********
2026-01-09 00:47:38.855879 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:47:38.855883 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:47:38.855887 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:47:38.855891 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:47:38.855894 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:47:38.855898 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:47:38.855902 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855906 | orchestrator |
2026-01-09 00:47:38.855910 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-09 00:47:38.855913 | orchestrator | Friday 09 January 2026 00:46:37 +0000 (0:00:13.503) 0:00:26.017 ********
2026-01-09 00:47:38.855917 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:47:38.855921 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:47:38.855925 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:47:38.855928 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:47:38.855932 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:47:38.855936 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:47:38.855939 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.855943 | orchestrator |
2026-01-09 00:47:38.855947 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-09 00:47:38.855951 | orchestrator | Friday 09 January 2026 00:47:15 +0000 (0:00:37.911) 0:01:03.928 ********
2026-01-09 00:47:38.855956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:47:38.855962 | orchestrator |
2026-01-09 00:47:38.855965 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-09 00:47:38.855969 | orchestrator | Friday 09 January 2026 00:47:17 +0000 (0:00:01.893) 0:01:05.822 ********
2026-01-09 00:47:38.855973 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-09 00:47:38.855977 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-09 00:47:38.855981 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-09 00:47:38.855985 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-09 00:47:38.855988 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-09 00:47:38.855992 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-09 00:47:38.855996 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-09 00:47:38.856000 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-09 00:47:38.856004 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-09 00:47:38.856007 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-09 00:47:38.856011 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-09 00:47:38.856015 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-09 00:47:38.856018 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-09 00:47:38.856022 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-09 00:47:38.856029 | orchestrator |
2026-01-09 00:47:38.856033 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-09 00:47:38.856036 | orchestrator | Friday 09 January 2026 00:47:22 +0000 (0:00:04.683) 0:01:10.506 ********
2026-01-09 00:47:38.856040 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.856044 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:47:38.856048 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:47:38.856051 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:47:38.856055 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:47:38.856059 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:47:38.856063 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:47:38.856066 | orchestrator |
2026-01-09 00:47:38.856089 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-09 00:47:38.856093 | orchestrator | Friday 09 January 2026 00:47:23 +0000 (0:00:01.166) 0:01:11.672 ********
2026-01-09 00:47:38.856097 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:47:38.856101 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:47:38.856104 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.856108 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:47:38.856112 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:47:38.856115 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:47:38.856119 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:47:38.856123 | orchestrator |
2026-01-09 00:47:38.856127 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-09 00:47:38.856133 | orchestrator | Friday 09 January 2026 00:47:25 +0000 (0:00:01.885) 0:01:13.558 ********
2026-01-09 00:47:38.856137 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:47:38.856141 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:47:38.856144 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.856148 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:47:38.856152 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:47:38.856156 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:47:38.856159 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:47:38.856163 | orchestrator |
2026-01-09 00:47:38.856167 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-09 00:47:38.856171 | orchestrator | Friday 09 January 2026 00:47:27 +0000 (0:00:01.810) 0:01:15.369 ********
2026-01-09 00:47:38.856174 | orchestrator | ok: [testbed-manager]
2026-01-09 00:47:38.856178 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:47:38.856182 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:47:38.856185 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:47:38.856189 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:47:38.856193 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:47:38.856197 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:47:38.856200 | orchestrator |
2026-01-09 00:47:38.856204 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-09 00:47:38.856208 | orchestrator | Friday 09 January 2026 00:47:28 +0000 (0:00:01.905) 0:01:17.274 ********
2026-01-09 00:47:38.856212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-09 00:47:38.856219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:47:38.856223 | orchestrator |
2026-01-09 00:47:38.856227 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-09 00:47:38.856277 | orchestrator | Friday 09 January 2026 00:47:31 +0000 (0:00:02.129) 0:01:19.404 ********
2026-01-09 00:47:38.856283 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.856287 | orchestrator |
2026-01-09 00:47:38.856291 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-09 00:47:38.856295 | orchestrator | Friday 09 January 2026 00:47:33 +0000 (0:00:02.572) 0:01:21.976 ********
2026-01-09 00:47:38.856299 | orchestrator | changed: [testbed-manager]
2026-01-09 00:47:38.856309 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:47:38.856313 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:47:38.856317 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:47:38.856321 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:47:38.856325 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:47:38.856329 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:47:38.856333 | orchestrator |
2026-01-09 00:47:38.856336 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:47:38.856340 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856344 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856348 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856352 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856356 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856360 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856364 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:47:38.856368 | orchestrator |
2026-01-09 00:47:38.856372 | orchestrator |
2026-01-09 00:47:38.856376 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:47:38.856379 | orchestrator | Friday 09 January 2026 00:47:36 +0000 (0:00:03.095) 0:01:25.072 ********
2026-01-09 00:47:38.856383 | orchestrator | ===============================================================================
2026-01-09 00:47:38.856387 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.91s
2026-01-09 00:47:38.856391 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.50s
2026-01-09 00:47:38.856395 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.68s
2026-01-09 00:47:38.856399 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.44s
2026-01-09 00:47:38.856403 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.10s
2026-01-09 00:47:38.856406 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.57s
2026-01-09 00:47:38.856410 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.53s
2026-01-09 00:47:38.856414 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.51s
2026-01-09 00:47:38.856418 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.13s
2026-01-09 00:47:38.856422 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.93s
2026-01-09 00:47:38.856426 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.91s
2026-01-09 00:47:38.856432 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.89s
2026-01-09 00:47:38.856436 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.89s
2026-01-09 00:47:38.856440 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.81s
2026-01-09 00:47:38.856444 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.70s
2026-01-09 00:47:38.856448 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.17s
2026-01-09 00:47:38.856451 | orchestrator | 2026-01-09 00:47:38 | INFO  | Task e626a98a-9b3a-4d34-9f1b-831b8e8148b3 is in state SUCCESS
2026-01-09 00:47:38.856484 | orchestrator | 2026-01-09 00:47:38 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:38.857660 | orchestrator | 2026-01-09 00:47:38 | INFO  | Task 8049554d-d082-4e65-9e72-16d44dbc48d0 is in state SUCCESS
2026-01-09 00:47:38.859862 | orchestrator | 2026-01-09 00:47:38 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:38.861888 | orchestrator | 2026-01-09 00:47:38 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:38.861962 | orchestrator | 2026-01-09 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:41.926325 | orchestrator | 2026-01-09 00:47:41 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:41.927515 | orchestrator | 2026-01-09 00:47:41 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:41.928697 | orchestrator | 2026-01-09 00:47:41 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:41.928807 | orchestrator | 2026-01-09 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:44.985659 | orchestrator | 2026-01-09 00:47:44 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:44.986549 | orchestrator | 2026-01-09 00:47:44 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:44.987611 | orchestrator | 2026-01-09 00:47:44 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:44.987762 | orchestrator | 2026-01-09 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:48.042666 | orchestrator | 2026-01-09 00:47:48 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:48.048936 | orchestrator | 2026-01-09 00:47:48 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:48.058476 | orchestrator | 2026-01-09 00:47:48 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:48.058594 | orchestrator | 2026-01-09 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:51.127502 | orchestrator | 2026-01-09 00:47:51 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:51.127575 | orchestrator | 2026-01-09 00:47:51 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:51.129159 | orchestrator | 2026-01-09 00:47:51 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:51.129220 | orchestrator | 2026-01-09 00:47:51 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:54.168821 | orchestrator | 2026-01-09 00:47:54 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:54.169750 | orchestrator | 2026-01-09 00:47:54 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:54.171760 | orchestrator | 2026-01-09 00:47:54 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:54.171945 | orchestrator | 2026-01-09 00:47:54 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:47:57.238358 | orchestrator | 2026-01-09 00:47:57 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:47:57.240998 | orchestrator | 2026-01-09 00:47:57 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:47:57.244827 | orchestrator | 2026-01-09 00:47:57 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:47:57.245094 | orchestrator | 2026-01-09 00:47:57 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:00.315177 | orchestrator | 2026-01-09 00:48:00 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:00.318361 | orchestrator | 2026-01-09 00:48:00 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:00.320559 | orchestrator | 2026-01-09 00:48:00 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:00.320615 | orchestrator | 2026-01-09 00:48:00 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:03.376113 | orchestrator | 2026-01-09 00:48:03 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:03.390399 | orchestrator | 2026-01-09 00:48:03 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:03.390480 | orchestrator | 2026-01-09 00:48:03 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:03.390489 | orchestrator | 2026-01-09 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:06.451563 | orchestrator | 2026-01-09 00:48:06 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:06.452837 | orchestrator | 2026-01-09 00:48:06 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:06.453818 | orchestrator | 2026-01-09 00:48:06 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:06.453856 | orchestrator | 2026-01-09 00:48:06 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:09.518908 | orchestrator | 2026-01-09 00:48:09 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:09.520971 | orchestrator | 2026-01-09 00:48:09 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:09.523957 | orchestrator | 2026-01-09 00:48:09 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:09.524016 | orchestrator | 2026-01-09 00:48:09 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:12.575118 | orchestrator | 2026-01-09 00:48:12 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:12.576822 | orchestrator | 2026-01-09 00:48:12 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:12.577618 | orchestrator | 2026-01-09 00:48:12 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:12.577654 | orchestrator | 2026-01-09 00:48:12 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:15.619495 | orchestrator | 2026-01-09 00:48:15 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:15.620262 | orchestrator | 2026-01-09 00:48:15 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:15.621972 | orchestrator | 2026-01-09 00:48:15 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:15.622069 | orchestrator | 2026-01-09 00:48:15 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:18.671623 | orchestrator | 2026-01-09 00:48:18 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:18.673641 | orchestrator | 2026-01-09 00:48:18 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:18.675613 | orchestrator | 2026-01-09 00:48:18 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state STARTED
2026-01-09 00:48:18.676183 | orchestrator | 2026-01-09 00:48:18 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:48:21.720388 | orchestrator | 2026-01-09 00:48:21 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:48:21.720972 | orchestrator | 2026-01-09 00:48:21 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:48:21.721637 | orchestrator | 2026-01-09 00:48:21 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:48:21.722422 | orchestrator | 2026-01-09 00:48:21 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED
2026-01-09 00:48:21.725867 | orchestrator | 2026-01-09 00:48:21 | INFO  | Task 20938748-b0f1-4e4e-a568-3957dd135fad is in state SUCCESS
2026-01-09 00:48:21.727736 | orchestrator |
2026-01-09 00:48:21.727847 | orchestrator |
2026-01-09 00:48:21.727859 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-09 00:48:21.727867 | orchestrator |
2026-01-09 00:48:21.727875 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-09 00:48:21.727882 | orchestrator | Friday 09 January 2026 00:46:34 +0000 (0:00:00.263) 0:00:00.263 ********
2026-01-09 00:48:21.727889 | orchestrator | ok: [testbed-manager]
2026-01-09 00:48:21.727897 | orchestrator |
2026-01-09 00:48:21.727904 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-09 00:48:21.727911 | orchestrator | Friday 09 January 2026 00:46:36 +0000 (0:00:01.221) 0:00:01.484 ********
2026-01-09 00:48:21.727918 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-09 00:48:21.727925 | orchestrator |
2026-01-09 00:48:21.727932 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-09 00:48:21.727938 | orchestrator | Friday 09 January 2026 00:46:36 +0000 (0:00:00.680) 0:00:02.165 ********
2026-01-09 00:48:21.727979 | orchestrator | changed: [testbed-manager]
2026-01-09 00:48:21.727987 | orchestrator |
2026-01-09 00:48:21.727993 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-09 00:48:21.728000 | orchestrator | Friday 09 January 2026 00:46:38 +0000 (0:00:01.618) 0:00:03.784 ********
2026-01-09 00:48:21.728007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-09 00:48:21.728015 | orchestrator | ok: [testbed-manager]
2026-01-09 00:48:21.728021 | orchestrator |
2026-01-09 00:48:21.728028 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-09 00:48:21.728035 | orchestrator | Friday 09 January 2026 00:47:29 +0000 (0:00:51.236) 0:00:55.021 ********
2026-01-09 00:48:21.728042 | orchestrator | changed: [testbed-manager]
2026-01-09 00:48:21.728050 | orchestrator |
2026-01-09 00:48:21.728057 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:48:21.728064 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:48:21.728072 | orchestrator |
2026-01-09 00:48:21.728079 | orchestrator |
2026-01-09 00:48:21.728086 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:48:21.728093 | orchestrator | Friday 09 January 2026 00:47:37 +0000 (0:00:07.848) 0:01:02.869 ********
2026-01-09 00:48:21.728105 | orchestrator | ===============================================================================
2026-01-09 00:48:21.728112 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.24s
2026-01-09 00:48:21.728119 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.85s
2026-01-09 00:48:21.728125 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.62s
2026-01-09 00:48:21.728132 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.22s
2026-01-09 00:48:21.728142 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.68s
2026-01-09 00:48:21.728153 | orchestrator |
2026-01-09 00:48:21.728165 | orchestrator |
2026-01-09 00:48:21.728176 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-09 00:48:21.728188 | orchestrator |
2026-01-09 00:48:21.728200 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-09 00:48:21.728226 | orchestrator | Friday 09 January 2026 00:46:05 +0000 (0:00:00.266) 0:00:00.266 ********
2026-01-09 00:48:21.728238 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:48:21.728247 | orchestrator |
2026-01-09 00:48:21.728254 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-09 00:48:21.728261 | orchestrator | Friday 09 January 2026 00:46:06 +0000 (0:00:01.284) 0:00:01.550 ********
2026-01-09 00:48:21.728268 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-09 00:48:21.728274 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-09 00:48:21.728281 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-09 00:48:21.728288 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'},
'fluentd']) 2026-01-09 00:48:21.728295 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-09 00:48:21.728301 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-09 00:48:21.728308 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-09 00:48:21.728315 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728321 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728328 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728335 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-09 00:48:21.728342 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728349 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728355 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728362 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728369 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728387 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-09 00:48:21.728394 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728401 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728407 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 
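The `FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).` line in the phpmyadmin play above is how Ansible reports a task running under an `until`/`retries`/`delay` loop: the first attempt failed and was repeated until it returned `ok`. A minimal sketch of such a task (the module and its arguments are illustrative assumptions, not the actual `osism.services.phpmyadmin` source):

```yaml
# Sketch only: repeat the task until it succeeds, as the role's
# retry output above indicates. Module name and args are hypothetical.
- name: Manage phpmyadmin service
  community.docker.docker_compose_v2:
    project_src: /opt/phpmyadmin
    state: present
  register: result
  until: result is success   # re-run while the task keeps failing
  retries: 10                # matches "(10 retries left)" in the log
  delay: 5                   # seconds between attempts
```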
2026-01-09 00:48:21.728446 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-09 00:48:21.728458 | orchestrator | 2026-01-09 00:48:21.728470 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-09 00:48:21.728482 | orchestrator | Friday 09 January 2026 00:46:10 +0000 (0:00:04.060) 0:00:05.610 ******** 2026-01-09 00:48:21.728489 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:48:21.728497 | orchestrator | 2026-01-09 00:48:21.728503 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-09 00:48:21.728510 | orchestrator | Friday 09 January 2026 00:46:12 +0000 (0:00:01.389) 0:00:07.000 ******** 2026-01-09 00:48:21.728520 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728597 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.728647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728698 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.728813 | orchestrator | 2026-01-09 00:48:21.728820 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-09 00:48:21.728827 | orchestrator | Friday 09 January 2026 00:46:17 +0000 (0:00:05.154) 0:00:12.155 ******** 2026-01-09 00:48:21.728839 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.728847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728867 | orchestrator | skipping: [testbed-manager] 2026-01-09 00:48:21.728877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.728885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728899 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:48:21.728906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.728913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.728945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.728969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.728983 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:48:21.728990 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:48:21.728997 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:48:21.729004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.729019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.729027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.729034 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:48:21.729041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-09 00:48:21.729051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729065 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:48:21.729072 | orchestrator |
2026-01-09 00:48:21.729079 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-09 00:48:21.729085 | orchestrator | Friday 09 January 2026  00:46:18 +0000 (0:00:01.376)       0:00:13.531 ********
2026-01-09 00:48:21.729092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729129 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729142 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:48:21.729149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729173 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:48:21.729180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729205 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:48:21.729212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729872 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:48:21.729879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729903 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:48:21.729910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729945 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:48:21.729952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.729959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.729976 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:48:21.729983 | orchestrator |
2026-01-09 00:48:21.729990 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-09 00:48:21.729997 | orchestrator | Friday 09 January 2026  00:46:21 +0000 (0:00:02.687)       0:00:16.218 ********
2026-01-09 00:48:21.730003 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:48:21.730010 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:48:21.730072 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:48:21.730079 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:48:21.730085 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:48:21.730092 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:48:21.730099 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:48:21.730105 | orchestrator |
2026-01-09 00:48:21.730112 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-09 00:48:21.730119 | orchestrator | Friday 09 January 2026  00:46:22 +0000 (0:00:01.227)       0:00:17.446 ********
2026-01-09 00:48:21.730126 | orchestrator | skipping: [testbed-manager]
2026-01-09 00:48:21.730132 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:48:21.730139 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:48:21.730150 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:48:21.730156 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:48:21.730163 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:48:21.730170 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:48:21.730176 | orchestrator |
2026-01-09 00:48:21.730183 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-09 00:48:21.730189 | orchestrator | Friday 09 January 2026  00:46:24 +0000 (0:00:01.531)       0:00:18.978 ********
2026-01-09 00:48:21.730197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730262 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730382 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730405 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730456 | orchestrator |
2026-01-09 00:48:21.730466 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-09 00:48:21.730474 | orchestrator | Friday 09 January 2026  00:46:32 +0000 (0:00:08.436)       0:00:27.415 ********
2026-01-09 00:48:21.730482 | orchestrator | [WARNING]: Skipped
2026-01-09 00:48:21.730495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-09 00:48:21.730503 | orchestrator | to this access issue:
2026-01-09 00:48:21.730514 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-09 00:48:21.730523 | orchestrator | directory
2026-01-09 00:48:21.730532 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 00:48:21.730540 | orchestrator |
2026-01-09 00:48:21.730548 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-09 00:48:21.730557 | orchestrator | Friday 09 January 2026  00:46:34 +0000 (0:00:02.305)       0:00:29.720 ********
2026-01-09 00:48:21.730564 | orchestrator | [WARNING]: Skipped
2026-01-09 00:48:21.730572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-09 00:48:21.730581 | orchestrator | to this access issue:
2026-01-09 00:48:21.730588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-09 00:48:21.730596 | orchestrator | directory
2026-01-09 00:48:21.730605 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 00:48:21.730612 | orchestrator |
2026-01-09 00:48:21.730618 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-09 00:48:21.730625 | orchestrator | Friday 09 January 2026  00:46:35 +0000 (0:00:01.033)       0:00:30.812 ********
2026-01-09 00:48:21.730631 | orchestrator | [WARNING]: Skipped
2026-01-09 00:48:21.730638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-09 00:48:21.730645 | orchestrator | to this access issue:
2026-01-09 00:48:21.730651 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-09 00:48:21.730667 | orchestrator | directory
2026-01-09 00:48:21.730673 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 00:48:21.730679 | orchestrator |
2026-01-09 00:48:21.730685 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-09 00:48:21.730692 | orchestrator | Friday 09 January 2026  00:46:36 +0000 (0:00:01.086)       0:00:31.846 ********
2026-01-09 00:48:21.730698 | orchestrator | [WARNING]: Skipped
2026-01-09 00:48:21.730704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-09 00:48:21.730716 | orchestrator | to this access issue:
2026-01-09 00:48:21.730723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-09 00:48:21.730729 | orchestrator | directory
2026-01-09 00:48:21.730735 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 00:48:21.730741 | orchestrator |
2026-01-09 00:48:21.730747 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-09 00:48:21.730754 | orchestrator | Friday 09 January 2026  00:46:38 +0000 (0:00:05.569)       0:00:32.932 ********
2026-01-09 00:48:21.730760 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:48:21.730766 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:48:21.730772 | orchestrator | changed: [testbed-manager]
2026-01-09 00:48:21.730778 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:48:21.730784 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:48:21.730790 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:48:21.730797 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:48:21.730803 | orchestrator |
2026-01-09 00:48:21.730809 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-09 00:48:21.730815 | orchestrator | Friday 09 January 2026  00:46:43 +0000 (0:00:02.980)       0:00:38.502 ********
2026-01-09 00:48:21.730821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730828 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730834 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730847 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730857 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730870 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-09 00:48:21.730876 | orchestrator |
2026-01-09 00:48:21.730883 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-09 00:48:21.730889 | orchestrator | Friday 09 January 2026  00:46:46 +0000 (0:00:02.980)       0:00:41.482 ********
2026-01-09 00:48:21.730895 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:48:21.730901 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:48:21.730907 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:48:21.730914 | orchestrator | changed: [testbed-manager]
2026-01-09 00:48:21.730920 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:48:21.730926 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:48:21.730932 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:48:21.730938 | orchestrator |
2026-01-09 00:48:21.730944 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-09 00:48:21.730950 | orchestrator | Friday 09 January 2026  00:46:49 +0000 (0:00:03.017)       0:00:44.500 ********
2026-01-09 00:48:21.730957 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.730980 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.730991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731002 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731027 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.731039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731063 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.731078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731112 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.731127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:48:21.731134 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-09 00:48:21.731141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt',
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.731150 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731157 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731164 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:48:21.731183 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731190 | orchestrator | 2026-01-09 00:48:21.731196 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-09 00:48:21.731202 | orchestrator | Friday 09 January 2026 00:46:53 +0000 (0:00:04.083) 0:00:48.583 ******** 2026-01-09 00:48:21.731209 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731215 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731238 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731244 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731250 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-09 00:48:21.731257 | orchestrator | 2026-01-09 00:48:21.731263 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-09 00:48:21.731270 | orchestrator | Friday 09 January 2026 00:46:57 +0000 (0:00:03.930) 0:00:52.514 ******** 2026-01-09 00:48:21.731276 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731282 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731295 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731301 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731308 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731314 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-09 00:48:21.731320 | orchestrator | 2026-01-09 00:48:21.731326 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-09 00:48:21.731333 | orchestrator | Friday 09 January 2026 00:47:00 +0000 (0:00:03.038) 0:00:55.553 ******** 2026-01-09 00:48:21.731342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731349 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731400 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-09 00:48:21.731454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731461 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:48:21.731520 | orchestrator | 2026-01-09 00:48:21.731530 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-09 00:48:21.731536 | orchestrator | Friday 09 January 2026 00:47:04 +0000 (0:00:03.627) 0:00:59.181 ******** 2026-01-09 00:48:21.731543 | orchestrator | changed: [testbed-manager] 2026-01-09 00:48:21.731549 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:48:21.731555 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:48:21.731561 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:48:21.731567 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:48:21.731573 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:48:21.731580 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:48:21.731586 | orchestrator | 2026-01-09 
00:48:21.731592 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-09 00:48:21.731599 | orchestrator | Friday 09 January 2026 00:47:05 +0000 (0:00:01.653) 0:01:00.834 ******** 2026-01-09 00:48:21.731605 | orchestrator | changed: [testbed-manager] 2026-01-09 00:48:21.731611 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:48:21.731617 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:48:21.731624 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:48:21.731630 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:48:21.731636 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:48:21.731642 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:48:21.731648 | orchestrator | 2026-01-09 00:48:21.731655 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731661 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:01.091) 0:01:01.925 ******** 2026-01-09 00:48:21.731667 | orchestrator | 2026-01-09 00:48:21.731674 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731680 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.063) 0:01:01.989 ******** 2026-01-09 00:48:21.731690 | orchestrator | 2026-01-09 00:48:21.731696 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731702 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.066) 0:01:02.056 ******** 2026-01-09 00:48:21.731709 | orchestrator | 2026-01-09 00:48:21.731715 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731721 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.175) 0:01:02.232 ******** 2026-01-09 00:48:21.731728 | orchestrator | 2026-01-09 00:48:21.731734 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-01-09 00:48:21.731740 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.062) 0:01:02.295 ******** 2026-01-09 00:48:21.731746 | orchestrator | 2026-01-09 00:48:21.731753 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731762 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.057) 0:01:02.352 ******** 2026-01-09 00:48:21.731769 | orchestrator | 2026-01-09 00:48:21.731775 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-09 00:48:21.731781 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.061) 0:01:02.413 ******** 2026-01-09 00:48:21.731788 | orchestrator | 2026-01-09 00:48:21.731794 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-09 00:48:21.731800 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:00.082) 0:01:02.496 ******** 2026-01-09 00:48:21.731806 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:48:21.731812 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:48:21.731819 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:48:21.731825 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:48:21.731831 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:48:21.731838 | orchestrator | changed: [testbed-manager] 2026-01-09 00:48:21.731844 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:48:21.731850 | orchestrator | 2026-01-09 00:48:21.731857 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-09 00:48:21.731863 | orchestrator | Friday 09 January 2026 00:47:39 +0000 (0:00:31.887) 0:01:34.384 ******** 2026-01-09 00:48:21.731869 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:48:21.731875 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:48:21.731881 | orchestrator | changed: 
[testbed-node-2] 2026-01-09 00:48:21.731888 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:48:21.731894 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:48:21.731900 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:48:21.731906 | orchestrator | changed: [testbed-manager] 2026-01-09 00:48:21.731912 | orchestrator | 2026-01-09 00:48:21.731919 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-09 00:48:21.731925 | orchestrator | Friday 09 January 2026 00:48:07 +0000 (0:00:28.012) 0:02:02.396 ******** 2026-01-09 00:48:21.731931 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:48:21.731937 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:48:21.731944 | orchestrator | ok: [testbed-manager] 2026-01-09 00:48:21.731950 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:48:21.731956 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:48:21.731963 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:48:21.731973 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:48:21.731984 | orchestrator | 2026-01-09 00:48:21.731996 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-09 00:48:21.732008 | orchestrator | Friday 09 January 2026 00:48:10 +0000 (0:00:03.083) 0:02:05.480 ******** 2026-01-09 00:48:21.732019 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:48:21.732031 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:48:21.732038 | orchestrator | changed: [testbed-manager] 2026-01-09 00:48:21.732044 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:48:21.732050 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:48:21.732056 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:48:21.732062 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:48:21.732072 | orchestrator | 2026-01-09 00:48:21.732079 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 
00:48:21.732086 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732092 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732103 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732110 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732116 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732122 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732128 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-09 00:48:21.732135 | orchestrator | 2026-01-09 00:48:21.732141 | orchestrator | 2026-01-09 00:48:21.732147 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:48:21.732153 | orchestrator | Friday 09 January 2026 00:48:20 +0000 (0:00:09.604) 0:02:15.084 ******** 2026-01-09 00:48:21.732159 | orchestrator | =============================================================================== 2026-01-09 00:48:21.732166 | orchestrator | common : Restart fluentd container ------------------------------------- 31.89s 2026-01-09 00:48:21.732172 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.01s 2026-01-09 00:48:21.732178 | orchestrator | common : Restart cron container ----------------------------------------- 9.60s 2026-01-09 00:48:21.732184 | orchestrator | common : Copying over config.json files for services -------------------- 8.44s 2026-01-09 00:48:21.732190 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 
5.57s 2026-01-09 00:48:21.732196 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.15s 2026-01-09 00:48:21.732202 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.08s 2026-01-09 00:48:21.732209 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.06s 2026-01-09 00:48:21.732218 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.93s 2026-01-09 00:48:21.732224 | orchestrator | common : Check common containers ---------------------------------------- 3.63s 2026-01-09 00:48:21.732231 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.08s 2026-01-09 00:48:21.732237 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.04s 2026-01-09 00:48:21.732243 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.02s 2026-01-09 00:48:21.732249 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.98s 2026-01-09 00:48:21.732255 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.69s 2026-01-09 00:48:21.732261 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.31s 2026-01-09 00:48:21.732267 | orchestrator | common : Creating log volume -------------------------------------------- 1.65s 2026-01-09 00:48:21.732274 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.53s 2026-01-09 00:48:21.732280 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s 2026-01-09 00:48:21.732286 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.38s 2026-01-09 00:48:21.732292 | orchestrator | 2026-01-09 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 
00:48:24.761000 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:24.761603 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task cd45bc81-9f77-422e-984f-16ce71e19f17 is in state STARTED 2026-01-09 00:48:24.762250 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:24.763136 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:24.766159 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:24.766814 | orchestrator | 2026-01-09 00:48:24 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:24.766841 | orchestrator | 2026-01-09 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:43.059595 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:43.061720 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task cd45bc81-9f77-422e-984f-16ce71e19f17 is in state STARTED 2026-01-09 00:48:43.062942 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 
00:48:43.065961 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:43.069048 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:43.069440 | orchestrator | 2026-01-09 00:48:43 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:43.069674 | orchestrator | 2026-01-09 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:46.122601 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:46.125568 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task cd45bc81-9f77-422e-984f-16ce71e19f17 is in state SUCCESS 2026-01-09 00:48:46.129101 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:46.131766 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:46.134601 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:46.136866 | orchestrator | 2026-01-09 00:48:46 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:46.137252 | orchestrator | 2026-01-09 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:49.181308 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:49.181453 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:49.182136 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:49.182901 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 
00:48:49.183942 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:48:49.184912 | orchestrator | 2026-01-09 00:48:49 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:49.184952 | orchestrator | 2026-01-09 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:52.268865 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:52.275543 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:52.276927 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:52.279850 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:52.285276 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:48:52.288955 | orchestrator | 2026-01-09 00:48:52 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:52.289002 | orchestrator | 2026-01-09 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:55.390812 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:55.391112 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:55.392045 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:55.392601 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:55.393789 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 
00:48:55.395929 | orchestrator | 2026-01-09 00:48:55 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:55.395972 | orchestrator | 2026-01-09 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:48:58.479740 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:48:58.481732 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:48:58.481811 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:48:58.481885 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:48:58.483163 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:48:58.484101 | orchestrator | 2026-01-09 00:48:58 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:48:58.484230 | orchestrator | 2026-01-09 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:49:01.538099 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:49:01.541459 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state STARTED 2026-01-09 00:49:01.543326 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:49:01.544850 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:49:01.547951 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:49:01.549960 | orchestrator | 2026-01-09 00:49:01 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 
00:49:01.550615 | orchestrator | 2026-01-09 00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:49:04.604696 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:49:04.605678 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task a3c93fe9-71d1-4bf7-b6c3-e969eafdb974 is in state SUCCESS 2026-01-09 00:49:04.606768 | orchestrator | 2026-01-09 00:49:04.606816 | orchestrator | 2026-01-09 00:49:04.606826 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 00:49:04.606835 | orchestrator | 2026-01-09 00:49:04.606841 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:49:04.606849 | orchestrator | Friday 09 January 2026 00:48:26 +0000 (0:00:00.344) 0:00:00.344 ******** 2026-01-09 00:49:04.606856 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:49:04.606864 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:49:04.606871 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:49:04.606877 | orchestrator | 2026-01-09 00:49:04.606884 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:49:04.606902 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.576) 0:00:00.921 ******** 2026-01-09 00:49:04.606910 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-09 00:49:04.606917 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-09 00:49:04.606923 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-09 00:49:04.606929 | orchestrator | 2026-01-09 00:49:04.606936 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-09 00:49:04.606942 | orchestrator | 2026-01-09 00:49:04.606948 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-09 
00:49:04.606955 | orchestrator | Friday 09 January 2026 00:48:28 +0000 (0:00:01.094) 0:00:02.016 ******** 2026-01-09 00:49:04.606962 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:49:04.606968 | orchestrator | 2026-01-09 00:49:04.606975 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-09 00:49:04.606982 | orchestrator | Friday 09 January 2026 00:48:29 +0000 (0:00:00.828) 0:00:02.845 ******** 2026-01-09 00:49:04.606989 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-09 00:49:04.606996 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-09 00:49:04.607002 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-09 00:49:04.607033 | orchestrator | 2026-01-09 00:49:04.607040 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-09 00:49:04.607046 | orchestrator | Friday 09 January 2026 00:48:30 +0000 (0:00:00.899) 0:00:03.744 ******** 2026-01-09 00:49:04.607053 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-09 00:49:04.607059 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-09 00:49:04.607066 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-09 00:49:04.607072 | orchestrator | 2026-01-09 00:49:04.607079 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-01-09 00:49:04.607101 | orchestrator | Friday 09 January 2026 00:48:32 +0000 (0:00:02.163) 0:00:05.908 ******** 2026-01-09 00:49:04.607108 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:49:04.607115 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:49:04.607121 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:49:04.607127 | orchestrator | 2026-01-09 00:49:04.607133 | orchestrator | RUNNING HANDLER [memcached : Restart memcached 
container] ********************** 2026-01-09 00:49:04.607140 | orchestrator | Friday 09 January 2026 00:48:34 +0000 (0:00:02.675) 0:00:08.583 ******** 2026-01-09 00:49:04.607146 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:49:04.607153 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:49:04.607159 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:49:04.607166 | orchestrator | 2026-01-09 00:49:04.607187 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:49:04.607203 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.607211 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.607218 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.607224 | orchestrator | 2026-01-09 00:49:04.607231 | orchestrator | 2026-01-09 00:49:04.607238 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:49:04.607244 | orchestrator | Friday 09 January 2026 00:48:44 +0000 (0:00:09.884) 0:00:18.467 ******** 2026-01-09 00:49:04.607251 | orchestrator | =============================================================================== 2026-01-09 00:49:04.607257 | orchestrator | memcached : Restart memcached container --------------------------------- 9.88s 2026-01-09 00:49:04.607263 | orchestrator | memcached : Check memcached container ----------------------------------- 2.67s 2026-01-09 00:49:04.607270 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.16s 2026-01-09 00:49:04.607276 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s 2026-01-09 00:49:04.607282 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s 
2026-01-09 00:49:04.607289 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.83s 2026-01-09 00:49:04.607295 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s 2026-01-09 00:49:04.607301 | orchestrator | 2026-01-09 00:49:04.607307 | orchestrator | 2026-01-09 00:49:04.607314 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 00:49:04.607320 | orchestrator | 2026-01-09 00:49:04.607327 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:49:04.607334 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.463) 0:00:00.463 ******** 2026-01-09 00:49:04.607340 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:49:04.607347 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:49:04.607353 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:49:04.607359 | orchestrator | 2026-01-09 00:49:04.607415 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:49:04.607435 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.666) 0:00:01.129 ******** 2026-01-09 00:49:04.607442 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-09 00:49:04.607449 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-09 00:49:04.607456 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-09 00:49:04.607463 | orchestrator | 2026-01-09 00:49:04.607469 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-09 00:49:04.607476 | orchestrator | 2026-01-09 00:49:04.607483 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-09 00:49:04.607490 | orchestrator | Friday 09 January 2026 00:48:28 +0000 (0:00:01.016) 0:00:02.146 ******** 2026-01-09 00:49:04.607506 | 
orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:49:04.607513 | orchestrator | 2026-01-09 00:49:04.607520 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-09 00:49:04.607527 | orchestrator | Friday 09 January 2026 00:48:29 +0000 (0:00:00.776) 0:00:02.923 ******** 2026-01-09 00:49:04.607536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607604 | orchestrator | 2026-01-09 00:49:04.607612 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-09 00:49:04.607623 | orchestrator | Friday 09 January 2026 00:48:30 +0000 (0:00:01.275) 0:00:04.199 ******** 2026-01-09 00:49:04.607631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607685 | orchestrator | 2026-01-09 00:49:04.607692 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-09 00:49:04.607700 | orchestrator | Friday 09 January 2026 00:48:34 +0000 (0:00:03.080) 0:00:07.279 ******** 2026-01-09 00:49:04.607803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607902 | orchestrator | 2026-01-09 00:49:04.607914 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-09 00:49:04.607921 | orchestrator | Friday 09 January 2026 00:48:37 +0000 (0:00:03.013) 0:00:10.293 ******** 2026-01-09 00:49:04.607930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-09 00:49:04.607975 | orchestrator | 2026-01-09 00:49:04.607982 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-09 00:49:04.607988 | orchestrator | Friday 09 January 2026 00:48:38 +0000 (0:00:01.883) 0:00:12.176 ******** 2026-01-09 00:49:04.607994 | orchestrator | 2026-01-09 00:49:04.608001 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-09 00:49:04.608010 | orchestrator | Friday 09 January 2026 00:48:38 +0000 (0:00:00.074) 0:00:12.250 ******** 2026-01-09 00:49:04.608015 | orchestrator | 2026-01-09 00:49:04.608019 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-09 00:49:04.608023 | orchestrator | Friday 09 January 2026 00:48:39 +0000 (0:00:00.071) 0:00:12.322 ******** 2026-01-09 00:49:04.608027 | orchestrator | 2026-01-09 00:49:04.608030 | 
orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-09 00:49:04.608034 | orchestrator | Friday 09 January 2026 00:48:39 +0000 (0:00:00.078) 0:00:12.401 ******** 2026-01-09 00:49:04.608038 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:49:04.608042 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:49:04.608046 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:49:04.608050 | orchestrator | 2026-01-09 00:49:04.608054 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-09 00:49:04.608058 | orchestrator | Friday 09 January 2026 00:48:48 +0000 (0:00:09.315) 0:00:21.716 ******** 2026-01-09 00:49:04.608062 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:49:04.608066 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:49:04.608070 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:49:04.608074 | orchestrator | 2026-01-09 00:49:04.608078 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:49:04.608082 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.608087 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.608091 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:49:04.608095 | orchestrator | 2026-01-09 00:49:04.608099 | orchestrator | 2026-01-09 00:49:04.608103 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:49:04.608107 | orchestrator | Friday 09 January 2026 00:49:00 +0000 (0:00:12.505) 0:00:34.221 ******** 2026-01-09 00:49:04.608110 | orchestrator | =============================================================================== 2026-01-09 00:49:04.608114 | orchestrator | redis : Restart 
redis-sentinel container ------------------------------- 12.51s 2026-01-09 00:49:04.608118 | orchestrator | redis : Restart redis container ----------------------------------------- 9.32s 2026-01-09 00:49:04.608123 | orchestrator | redis : Copying over default config.json files -------------------------- 3.08s 2026-01-09 00:49:04.608126 | orchestrator | redis : Copying over redis config files --------------------------------- 3.01s 2026-01-09 00:49:04.608130 | orchestrator | redis : Check redis containers ------------------------------------------ 1.88s 2026-01-09 00:49:04.608134 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.28s 2026-01-09 00:49:04.608143 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2026-01-09 00:49:04.608149 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2026-01-09 00:49:04.608153 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s 2026-01-09 00:49:04.608161 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-01-09 00:49:04.609149 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:49:04.609922 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:49:04.611957 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:49:04.613129 | orchestrator | 2026-01-09 00:49:04 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state STARTED 2026-01-09 00:49:04.613172 | orchestrator | 2026-01-09 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:49:41.333972 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:49:41.335639 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:49:41.336012 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:49:41.340088 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:49:41.340223 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:49:41.343784 | orchestrator | 2026-01-09 00:49:41 | INFO  | Task 3f05ea4e-0246-46f0-882a-8e89396634b5 is in state SUCCESS 2026-01-09 00:49:41.343876 | orchestrator | 2026-01-09 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:49:41.345101 | orchestrator | 2026-01-09 00:49:41.345167 | orchestrator | 2026-01-09 00:49:41.345175 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 00:49:41.345183 | orchestrator | 2026-01-09 00:49:41.345189 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:49:41.345196 | orchestrator | Friday 09 January 2026 00:48:26 +0000 (0:00:00.323) 0:00:00.323
******** 2026-01-09 00:49:41.345203 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:49:41.345212 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:49:41.345218 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:49:41.345224 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:49:41.345230 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:49:41.345236 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:49:41.345243 | orchestrator | 2026-01-09 00:49:41.345249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:49:41.345276 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.932) 0:00:01.256 ******** 2026-01-09 00:49:41.345283 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345290 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345296 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345303 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345310 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345316 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-09 00:49:41.345364 | orchestrator | 2026-01-09 00:49:41.345370 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-09 00:49:41.345373 | orchestrator | 2026-01-09 00:49:41.345378 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-09 00:49:41.345392 | orchestrator | Friday 09 January 2026 00:48:28 +0000 (0:00:01.333) 0:00:02.589 ******** 2026-01-09 00:49:41.345398 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:49:41.345403 | orchestrator | 2026-01-09 00:49:41.345407 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-09 00:49:41.345411 | orchestrator | Friday 09 January 2026 00:48:30 +0000 (0:00:01.740) 0:00:04.330 ******** 2026-01-09 00:49:41.345415 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-09 00:49:41.345419 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-09 00:49:41.345423 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-09 00:49:41.345427 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-09 00:49:41.345431 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-09 00:49:41.345435 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-09 00:49:41.345439 | orchestrator | 2026-01-09 00:49:41.345443 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-09 00:49:41.345447 | orchestrator | Friday 09 January 2026 00:48:32 +0000 (0:00:01.738) 0:00:06.068 ******** 2026-01-09 00:49:41.345452 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-09 00:49:41.345458 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-09 00:49:41.345464 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-09 00:49:41.345469 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-09 00:49:41.345475 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-09 00:49:41.345481 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-09 00:49:41.345487 | orchestrator | 2026-01-09 00:49:41.345493 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-09 00:49:41.345499 | orchestrator | Friday 09 January 2026 00:48:34 +0000 
(0:00:02.341) 0:00:08.409 ******** 2026-01-09 00:49:41.345505 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-09 00:49:41.345513 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:49:41.345520 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-09 00:49:41.345526 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:49:41.345532 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-09 00:49:41.345539 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:49:41.345545 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-09 00:49:41.345551 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:49:41.345558 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-09 00:49:41.345564 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:49:41.345571 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-09 00:49:41.345588 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:49:41.345596 | orchestrator | 2026-01-09 00:49:41.345602 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-09 00:49:41.345608 | orchestrator | Friday 09 January 2026 00:48:35 +0000 (0:00:01.496) 0:00:09.906 ******** 2026-01-09 00:49:41.345615 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:49:41.345621 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:49:41.345626 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:49:41.345634 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:49:41.345638 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:49:41.345642 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:49:41.345646 | orchestrator | 2026-01-09 00:49:41.345650 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-09 00:49:41.345655 | orchestrator | Friday 09 January 2026 00:48:36 +0000 
(0:00:00.939) 0:00:10.846 ******** 2026-01-09 00:49:41.345676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345736 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-09 00:49:41.345749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345762 | orchestrator |
2026-01-09 00:49:41.345767 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-09 00:49:41.345772 | orchestrator | Friday 09 January 2026 00:48:38 +0000 (0:00:01.896) 0:00:12.743 ********
2026-01-09 00:49:41.345776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345868 | orchestrator |
2026-01-09 00:49:41.345872 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-09 00:49:41.345877 | orchestrator | Friday 09 January 2026 00:48:42 +0000 (0:00:03.287) 0:00:16.030 ********
2026-01-09 00:49:41.345882 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:49:41.345886 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:49:41.345890 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:49:41.345895 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:49:41.345899 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:49:41.345903 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:49:41.345908 | orchestrator |
2026-01-09 00:49:41.345912 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-01-09 00:49:41.345917 | orchestrator | Friday 09 January 2026 00:48:43 +0000 (0:00:01.485) 0:00:17.516 ********
2026-01-09 00:49:41.345924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.345984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-09 00:49:41.345991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.346004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.346009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-09 00:49:41.346012 | orchestrator |
2026-01-09 00:49:41.346076 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346084 | orchestrator | Friday 09 January 2026 00:48:46 +0000 (0:00:02.855) 0:00:20.371 ********
2026-01-09 00:49:41.346096 | orchestrator |
2026-01-09 00:49:41.346103 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346109 | orchestrator | Friday 09 January 2026 00:48:46 +0000 (0:00:00.477) 0:00:20.849 ********
2026-01-09 00:49:41.346116 | orchestrator |
2026-01-09 00:49:41.346126 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346137 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:00.316) 0:00:21.165 ********
2026-01-09 00:49:41.346147 | orchestrator |
2026-01-09 00:49:41.346159 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346170 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:00.222) 0:00:21.388 ********
2026-01-09 00:49:41.346178 | orchestrator |
2026-01-09 00:49:41.346184 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346192 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:00.208) 0:00:21.597 ********
2026-01-09 00:49:41.346203 | orchestrator |
2026-01-09 00:49:41.346212 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-09 00:49:41.346220 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:00.313) 0:00:21.910 ********
2026-01-09 00:49:41.346226 | orchestrator |
2026-01-09 00:49:41.346232 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-09 00:49:41.346239 | orchestrator | Friday 09 January 2026 00:48:48 +0000 (0:00:00.337) 0:00:22.247 ********
2026-01-09 00:49:41.346246 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:49:41.346252 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:49:41.346259 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:49:41.346265 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:49:41.346271 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:49:41.346277 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:49:41.346284 | orchestrator |
2026-01-09 00:49:41.346290 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-09 00:49:41.346298 | orchestrator | Friday 09 January 2026 00:48:57 +0000 (0:00:09.465) 0:00:31.713 ********
2026-01-09 00:49:41.346305 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:49:41.346312 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:49:41.346318 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:49:41.346379 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:49:41.346385 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:49:41.346392 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:49:41.346397 | orchestrator |
2026-01-09 00:49:41.346403 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-09 00:49:41.346410 | orchestrator | Friday 09 January 2026 00:49:00 +0000 (0:00:02.294) 0:00:34.008 ********
2026-01-09 00:49:41.346416 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:49:41.346422 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:49:41.346429 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:49:41.346435 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:49:41.346441 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:49:41.346447 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:49:41.346454 | orchestrator |
2026-01-09 00:49:41.346460 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-09 00:49:41.346466 | orchestrator | Friday 09 January 2026 00:49:11 +0000 (0:00:11.539) 0:00:45.547 ********
2026-01-09 00:49:41.346473 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-09 00:49:41.346480 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-09 00:49:41.346486 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-09 00:49:41.346493 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-09 00:49:41.346506 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-09 00:49:41.346522 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-09 00:49:41.346526 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-09 00:49:41.346530 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-09 00:49:41.346534 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-09 00:49:41.346538 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-09 00:49:41.346542 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-09 00:49:41.346546 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-09 00:49:41.346550 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346554 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346565 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346568 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346575 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346579 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-09 00:49:41.346583 | orchestrator |
2026-01-09 00:49:41.346587 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-09 00:49:41.346591 | orchestrator | Friday 09 January 2026 00:49:20 +0000 (0:00:09.343) 0:00:54.891 ********
2026-01-09 00:49:41.346595 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-09 00:49:41.346599 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:49:41.346603 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-09 00:49:41.346607 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-09 00:49:41.346610 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:49:41.346614 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-09 00:49:41.346618 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:49:41.346622 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-09 00:49:41.346625 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-09 00:49:41.346629 | orchestrator |
2026-01-09 00:49:41.346633 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-09 00:49:41.346637 | orchestrator | Friday 09 January 2026 00:49:23 +0000 (0:00:02.484) 0:00:57.375 ********
2026-01-09 00:49:41.346641 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346645 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:49:41.346649 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346652 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:49:41.346656 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346660 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:49:41.346664 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346667 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346671 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-09 00:49:41.346675 | orchestrator |
2026-01-09 00:49:41.346684 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-09 00:49:41.346688 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:04.344) 0:01:01.720 ********
2026-01-09 00:49:41.346691 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:49:41.346695 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:49:41.346699 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:49:41.346703 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:49:41.346763 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:49:41.346772 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:49:41.346779 | orchestrator |
2026-01-09 00:49:41.346786 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:49:41.346792 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-09 00:49:41.346798 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-09 00:49:41.346802 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-09 00:49:41.346807 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-09 00:49:41.346814 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-09 00:49:41.346825 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-09 00:49:41.346832 | orchestrator |
2026-01-09 00:49:41.346839 | orchestrator |
2026-01-09 00:49:41.346845 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:49:41.346851 | orchestrator | Friday 09 January 2026 00:49:38 +0000 (0:00:10.368) 0:01:12.088 ********
2026-01-09 00:49:41.346858 | orchestrator | ===============================================================================
2026-01-09 00:49:41.346863 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.91s
2026-01-09 00:49:41.346870 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.47s
2026-01-09 00:49:41.346876 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.34s
2026-01-09 00:49:41.346882 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.34s
2026-01-09 00:49:41.346889 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.29s
2026-01-09 00:49:41.346895 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.86s
2026-01-09 00:49:41.346900 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.48s
2026-01-09 00:49:41.346907 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.34s
2026-01-09 00:49:41.346913 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.29s
2026-01-09 00:49:41.346919 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.90s
2026-01-09 00:49:41.346925 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.88s
2026-01-09 00:49:41.346936 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.74s
2026-01-09 00:49:41.346943 | orchestrator | module-load : Load modules ---------------------------------------------- 1.74s
2026-01-09 00:49:41.346949 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.50s
2026-01-09 00:49:41.346955 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.49s
2026-01-09 00:49:41.346962 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.33s
2026-01-09 00:49:41.346968 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.94s
2026-01-09 00:49:41.346981 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
2026-01-09 00:49:44.372778 | orchestrator | 2026-01-09 00:49:44 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:44.373184 | orchestrator | 2026-01-09 00:49:44 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:44.375541 | orchestrator | 2026-01-09 00:49:44 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:44.376212 | orchestrator | 2026-01-09 00:49:44 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:44.377284 | orchestrator | 2026-01-09 00:49:44 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:44.377314 | orchestrator | 2026-01-09 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:49:47.410717 | orchestrator | 2026-01-09 00:49:47 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:47.411495 | orchestrator | 2026-01-09 00:49:47 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:47.412097 | orchestrator | 2026-01-09 00:49:47 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:47.412960 | orchestrator | 2026-01-09 00:49:47 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:47.414831 | orchestrator | 2026-01-09 00:49:47 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:47.416677 | orchestrator | 2026-01-09 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:49:50.447896 | orchestrator | 2026-01-09 00:49:50 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:50.449125 | orchestrator | 2026-01-09 00:49:50 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:50.449804 | orchestrator | 2026-01-09 00:49:50 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:50.451653 | orchestrator | 2026-01-09 00:49:50 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:50.451979 | orchestrator | 2026-01-09 00:49:50 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:50.452010 | orchestrator | 2026-01-09 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:49:53.503997 | orchestrator | 2026-01-09 00:49:53 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:53.508108 | orchestrator | 2026-01-09 00:49:53 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:53.509541 | orchestrator | 2026-01-09 00:49:53 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:53.510892 | orchestrator | 2026-01-09 00:49:53 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:53.512486 | orchestrator | 2026-01-09 00:49:53 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:53.512548 | orchestrator | 2026-01-09 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:49:56.548149 | orchestrator | 2026-01-09 00:49:56 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:56.548842 | orchestrator | 2026-01-09 00:49:56 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:56.549505 | orchestrator | 2026-01-09 00:49:56 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:56.550343 | orchestrator | 2026-01-09 00:49:56 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:56.551211 | orchestrator | 2026-01-09 00:49:56 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:56.551232 | orchestrator | 2026-01-09 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:49:59.587925 | orchestrator | 2026-01-09 00:49:59 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:49:59.588500 | orchestrator | 2026-01-09 00:49:59 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:49:59.590149 | orchestrator | 2026-01-09 00:49:59 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:49:59.590976 | orchestrator | 2026-01-09 00:49:59 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:49:59.591903 | orchestrator | 2026-01-09 00:49:59 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:49:59.591934 | orchestrator | 2026-01-09 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:02.697880 | orchestrator | 2026-01-09 00:50:02 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:02.698206 | orchestrator | 2026-01-09 00:50:02 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:02.700640 | orchestrator | 2026-01-09 00:50:02 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:02.703533 | orchestrator | 2026-01-09 00:50:02 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:50:02.704894 | orchestrator | 2026-01-09 00:50:02 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:02.704964 | orchestrator | 2026-01-09 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:05.735253 | orchestrator | 2026-01-09 00:50:05 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:05.740148 | orchestrator | 2026-01-09 00:50:05 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:05.743504 | orchestrator | 2026-01-09 00:50:05 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:05.748929 | orchestrator | 2026-01-09 00:50:05 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:50:05.755077 | orchestrator | 2026-01-09 00:50:05 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:05.755130 | orchestrator | 2026-01-09 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:08.797778 | orchestrator | 2026-01-09 00:50:08 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:08.800747 | orchestrator | 2026-01-09 00:50:08 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:08.801835 | orchestrator | 2026-01-09 00:50:08 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:08.804141 | orchestrator | 2026-01-09 00:50:08 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:50:08.806457 | orchestrator | 2026-01-09 00:50:08 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:08.806505 | orchestrator | 2026-01-09 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:11.846315 | orchestrator | 2026-01-09 00:50:11 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:11.848820 | orchestrator | 2026-01-09 00:50:11 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:11.849824 | orchestrator | 2026-01-09 00:50:11 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:11.851691 | orchestrator | 2026-01-09 00:50:11 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:50:11.853244 | orchestrator | 2026-01-09 00:50:11 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:11.853299 | orchestrator | 2026-01-09 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:14.891544 | orchestrator | 2026-01-09 00:50:14 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:14.892949 | orchestrator | 2026-01-09 00:50:14 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:14.894140 | orchestrator | 2026-01-09 00:50:14 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:14.895368 | orchestrator | 2026-01-09 00:50:14 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED
2026-01-09 00:50:14.896871 | orchestrator | 2026-01-09 00:50:14 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:14.896952 | orchestrator | 2026-01-09 00:50:14 | INFO  | Wait 1
second(s) until the next check 2026-01-09 00:50:17.946651 | orchestrator | 2026-01-09 00:50:17 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:17.950454 | orchestrator | 2026-01-09 00:50:17 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:17.953538 | orchestrator | 2026-01-09 00:50:17 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:17.954741 | orchestrator | 2026-01-09 00:50:17 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:17.956726 | orchestrator | 2026-01-09 00:50:17 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:17.956800 | orchestrator | 2026-01-09 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:21.012198 | orchestrator | 2026-01-09 00:50:20 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:21.012261 | orchestrator | 2026-01-09 00:50:20 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:21.012267 | orchestrator | 2026-01-09 00:50:20 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:21.012271 | orchestrator | 2026-01-09 00:50:20 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:21.012288 | orchestrator | 2026-01-09 00:50:21 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:21.012296 | orchestrator | 2026-01-09 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:24.054749 | orchestrator | 2026-01-09 00:50:24 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:24.057022 | orchestrator | 2026-01-09 00:50:24 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:24.059190 | orchestrator | 2026-01-09 00:50:24 | INFO  | Task 
8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:24.061175 | orchestrator | 2026-01-09 00:50:24 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:24.063867 | orchestrator | 2026-01-09 00:50:24 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:24.064166 | orchestrator | 2026-01-09 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:27.115780 | orchestrator | 2026-01-09 00:50:27 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:27.117460 | orchestrator | 2026-01-09 00:50:27 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:27.119886 | orchestrator | 2026-01-09 00:50:27 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:27.123386 | orchestrator | 2026-01-09 00:50:27 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:27.127197 | orchestrator | 2026-01-09 00:50:27 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:27.127259 | orchestrator | 2026-01-09 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:30.178001 | orchestrator | 2026-01-09 00:50:30 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:30.179315 | orchestrator | 2026-01-09 00:50:30 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:30.179905 | orchestrator | 2026-01-09 00:50:30 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:30.181126 | orchestrator | 2026-01-09 00:50:30 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:30.182524 | orchestrator | 2026-01-09 00:50:30 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:30.183878 | orchestrator | 2026-01-09 00:50:30 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 00:50:33.279861 | orchestrator | 2026-01-09 00:50:33 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:33.279920 | orchestrator | 2026-01-09 00:50:33 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:33.279928 | orchestrator | 2026-01-09 00:50:33 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:33.282204 | orchestrator | 2026-01-09 00:50:33 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:33.285454 | orchestrator | 2026-01-09 00:50:33 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:33.285520 | orchestrator | 2026-01-09 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:36.450246 | orchestrator | 2026-01-09 00:50:36 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:36.450336 | orchestrator | 2026-01-09 00:50:36 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:36.450341 | orchestrator | 2026-01-09 00:50:36 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:36.450345 | orchestrator | 2026-01-09 00:50:36 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:36.450349 | orchestrator | 2026-01-09 00:50:36 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:36.450352 | orchestrator | 2026-01-09 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:39.545424 | orchestrator | 2026-01-09 00:50:39 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:39.548971 | orchestrator | 2026-01-09 00:50:39 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:39.549410 | orchestrator | 2026-01-09 00:50:39 | INFO  | Task 
8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:39.551190 | orchestrator | 2026-01-09 00:50:39 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:39.551522 | orchestrator | 2026-01-09 00:50:39 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:39.551597 | orchestrator | 2026-01-09 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:42.843611 | orchestrator | 2026-01-09 00:50:42 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:42.844110 | orchestrator | 2026-01-09 00:50:42 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:42.845752 | orchestrator | 2026-01-09 00:50:42 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:42.847241 | orchestrator | 2026-01-09 00:50:42 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:42.847977 | orchestrator | 2026-01-09 00:50:42 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:42.848001 | orchestrator | 2026-01-09 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:50:45.888652 | orchestrator | 2026-01-09 00:50:45 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:45.891549 | orchestrator | 2026-01-09 00:50:45 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:45.893997 | orchestrator | 2026-01-09 00:50:45 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:45.896208 | orchestrator | 2026-01-09 00:50:45 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state STARTED 2026-01-09 00:50:45.898955 | orchestrator | 2026-01-09 00:50:45 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED 2026-01-09 00:50:45.899086 | orchestrator | 2026-01-09 00:50:45 | INFO  | Wait 1 
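The "Task ... is in state STARTED" lines above show the deploy client polling the same five task IDs once per second until each leaves STARTED. A minimal shell sketch of such a wait loop; `task_state` is a stand-in for the real status query (an assumption, the actual client asks the OSISM API), and here every task flips to SUCCESS after three rounds so the sketch terminates quickly:

```shell
# Stand-in for the real per-task status query (assumption: the deploy
# client asks an API); states turn SUCCESS after three polling rounds.
ROUNDS=0
task_state() {
    if [ "$ROUNDS" -ge 3 ]; then echo "SUCCESS"; else echo "STARTED"; fi
}

# Poll every task ID until none is still STARTED, like the log above.
wait_for_tasks() {
    while :; do
        pending=0
        for id in "$@"; do
            state=$(task_state "$id")
            echo "Task $id is in state $state"
            if [ "$state" = "STARTED" ]; then pending=1; fi
        done
        if [ "$pending" -eq 0 ]; then break; fi
        echo "Wait 1 second(s) until the next check"
        sleep 1
        ROUNDS=$((ROUNDS + 1))
    done
}

wait_for_tasks fee975a5 ea5c8411 8e104a68
```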
second(s) until the next check 2026-01-09 00:50:48.940012 | orchestrator | 2026-01-09 00:50:48 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:50:48.940688 | orchestrator | 2026-01-09 00:50:48 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:50:48.942790 | orchestrator | 2026-01-09 00:50:48 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:50:48.944999 | orchestrator | 2026-01-09 00:50:48 | INFO  | Task 7151ddd3-acb8-45e9-acff-10122ae50979 is in state SUCCESS 2026-01-09 00:50:48.946396 | orchestrator | 2026-01-09 00:50:48.946424 | orchestrator | 2026-01-09 00:50:48.946432 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-09 00:50:48.946440 | orchestrator | 2026-01-09 00:50:48.946446 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-09 00:50:48.946453 | orchestrator | Friday 09 January 2026 00:46:05 +0000 (0:00:00.184) 0:00:00.184 ******** 2026-01-09 00:50:48.946459 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:50:48.946466 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:50:48.946473 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:50:48.946479 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.946485 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.946492 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.946499 | orchestrator | 2026-01-09 00:50:48.946506 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-09 00:50:48.946512 | orchestrator | Friday 09 January 2026 00:46:06 +0000 (0:00:00.664) 0:00:00.849 ******** 2026-01-09 00:50:48.946520 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946524 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946528 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946532 | 
orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946536 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946550 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946554 | orchestrator | 2026-01-09 00:50:48.946572 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-09 00:50:48.946576 | orchestrator | Friday 09 January 2026 00:46:07 +0000 (0:00:00.664) 0:00:01.513 ******** 2026-01-09 00:50:48.946580 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946584 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946588 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946591 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946595 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946599 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946603 | orchestrator | 2026-01-09 00:50:48.946606 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-09 00:50:48.946610 | orchestrator | Friday 09 January 2026 00:46:07 +0000 (0:00:00.626) 0:00:02.140 ******** 2026-01-09 00:50:48.946614 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:50:48.946619 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:50:48.946622 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:50:48.946626 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.946630 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.946634 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.946637 | orchestrator | 2026-01-09 00:50:48.946641 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-09 00:50:48.946645 | orchestrator | Friday 09 January 2026 00:46:09 +0000 (0:00:02.183) 0:00:04.323 ******** 2026-01-09 00:50:48.946649 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:50:48.946653 | orchestrator 
| changed: [testbed-node-4] 2026-01-09 00:50:48.946656 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:50:48.946660 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.946664 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.946670 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.946676 | orchestrator | 2026-01-09 00:50:48.946682 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-09 00:50:48.946716 | orchestrator | Friday 09 January 2026 00:46:11 +0000 (0:00:01.231) 0:00:05.555 ******** 2026-01-09 00:50:48.946723 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:50:48.946729 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:50:48.946732 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:50:48.946736 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.946740 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.946744 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.946747 | orchestrator | 2026-01-09 00:50:48.946751 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-09 00:50:48.946755 | orchestrator | Friday 09 January 2026 00:46:12 +0000 (0:00:01.289) 0:00:06.844 ******** 2026-01-09 00:50:48.946759 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946762 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946766 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946770 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946773 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946777 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946781 | orchestrator | 2026-01-09 00:50:48.946784 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-09 00:50:48.946788 | orchestrator | Friday 09 January 2026 00:46:13 +0000 (0:00:00.875) 
0:00:07.719 ******** 2026-01-09 00:50:48.946792 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946796 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946799 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946803 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946807 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946811 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946814 | orchestrator | 2026-01-09 00:50:48.946818 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-09 00:50:48.946822 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.741) 0:00:08.461 ******** 2026-01-09 00:50:48.946830 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946834 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946838 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946841 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946845 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946849 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946856 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946863 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946873 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946879 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946895 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946902 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946908 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946915 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946919 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946922 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 00:50:48.946926 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 00:50:48.946930 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946934 | orchestrator | 2026-01-09 00:50:48.946938 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-09 00:50:48.946941 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.770) 0:00:09.232 ******** 2026-01-09 00:50:48.946946 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.946952 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.946957 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.946963 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.946973 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.946979 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.946985 | orchestrator | 2026-01-09 00:50:48.946992 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-09 00:50:48.947000 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:01.281) 0:00:10.514 ******** 2026-01-09 00:50:48.947006 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:50:48.947013 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:50:48.947020 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:50:48.947026 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.947032 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.947039 | 
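The k3s_prereq tasks above (IPv4/IPv6 forwarding, router advertisements, and the skipped br_netfilter settings) all reduce to kernel sysctls plus one module load. A hand-run sketch of the same settings; the drop-in file name is an assumption, and the file goes to a temp path here so the sketch needs no root:

```shell
# Sysctl keys matching the k3s_prereq tasks in the log. accept_ra=2 keeps
# router advertisements honoured even with forwarding on; the bridge keys
# mirror the "just to be sure" task that was skipped on this run.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# On a real node (root required; file name 90-k3s.conf is assumed):
#   modprobe br_netfilter && echo br_netfilter > /etc/modules-load.d/k3s.conf
#   cp "$conf" /etc/sysctl.d/90-k3s.conf && sysctl --system
grep -c '^net\.' "$conf"
```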
orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.947045 | orchestrator | 2026-01-09 00:50:48.947053 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-09 00:50:48.947059 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:00.885) 0:00:11.400 ******** 2026-01-09 00:50:48.947067 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:50:48.947072 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.947076 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:50:48.947081 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.947086 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:50:48.947090 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.947094 | orchestrator | 2026-01-09 00:50:48.947099 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-09 00:50:48.947103 | orchestrator | Friday 09 January 2026 00:46:22 +0000 (0:00:05.278) 0:00:16.678 ******** 2026-01-09 00:50:48.947108 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947112 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947122 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947126 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.947130 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947135 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947140 | orchestrator | 2026-01-09 00:50:48.947144 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-09 00:50:48.947149 | orchestrator | Friday 09 January 2026 00:46:24 +0000 (0:00:02.231) 0:00:18.909 ******** 2026-01-09 00:50:48.947153 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947158 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947162 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.947167 | orchestrator | 
skipping: [testbed-node-0] 2026-01-09 00:50:48.947172 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947178 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947188 | orchestrator | 2026-01-09 00:50:48.947195 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-09 00:50:48.947202 | orchestrator | Friday 09 January 2026 00:46:26 +0000 (0:00:02.420) 0:00:21.329 ******** 2026-01-09 00:50:48.947208 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947213 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947218 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.947224 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947230 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947237 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947291 | orchestrator | 2026-01-09 00:50:48.947298 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-09 00:50:48.947304 | orchestrator | Friday 09 January 2026 00:46:28 +0000 (0:00:01.226) 0:00:22.556 ******** 2026-01-09 00:50:48.947311 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-09 00:50:48.947318 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-09 00:50:48.947324 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947331 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-09 00:50:48.947337 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-09 00:50:48.947344 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947350 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-09 00:50:48.947356 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-09 00:50:48.947363 | orchestrator | skipping: [testbed-node-5] 2026-01-09 
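The k3s_download role keeps one download task per CPU architecture (x64/arm64/armhf) and skips all but the matching one; only the x64 task changed on this run. The same dispatch in plain shell, with the asset-name suffixes modeled on the task names and to be treated as an assumption:

```shell
# Pick a k3s release asset name for the local CPU, mirroring the
# x64/arm64/armhf split of the k3s_download tasks. Unknown machines fall
# back to the x64 name in this sketch instead of failing.
case "$(uname -m)" in
    x86_64)  suffix=""        ;;
    aarch64) suffix="-arm64"  ;;
    armv7l)  suffix="-armhf"  ;;
    *)       suffix=""        ;;
esac
asset="k3s${suffix}"
echo "would download asset: $asset"
```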
00:50:48.947369 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-09 00:50:48.947375 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-09 00:50:48.947381 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947388 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-09 00:50:48.947394 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-09 00:50:48.947401 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947407 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-09 00:50:48.947413 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-09 00:50:48.947420 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947427 | orchestrator | 2026-01-09 00:50:48.947433 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-09 00:50:48.947446 | orchestrator | Friday 09 January 2026 00:46:29 +0000 (0:00:01.665) 0:00:24.221 ******** 2026-01-09 00:50:48.947453 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947459 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947466 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:50:48.947472 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947478 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947485 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947491 | orchestrator | 2026-01-09 00:50:48.947497 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-09 00:50:48.947509 | orchestrator | Friday 09 January 2026 00:46:31 +0000 (0:00:01.280) 0:00:25.501 ******** 2026-01-09 00:50:48.947515 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:50:48.947521 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:50:48.947527 | orchestrator | skipping: [testbed-node-5] 
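No custom registry was configured on this run, so every k3s_custom_registries task was skipped. Had one been set, /etc/rancher/k3s/registries.yaml would carry a containerd mirror map; a minimal sketch, with the endpoint a placeholder and the file written to a temp path instead of /etc/rancher/k3s so no root is needed:

```shell
# Shape of the k3s registries.yaml mirror configuration. The registry
# endpoint is a placeholder; a real deployment writes this file to
# /etc/rancher/k3s/registries.yaml (the directory the skipped task creates).
reg=$(mktemp)
cat > "$reg" <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
EOF
grep -q '^mirrors:' "$reg" && echo "registries.yaml written"
```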
2026-01-09 00:50:48.947533 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947539 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947545 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947551 | orchestrator | 2026-01-09 00:50:48.947556 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-09 00:50:48.947562 | orchestrator | 2026-01-09 00:50:48.947573 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-09 00:50:48.947579 | orchestrator | Friday 09 January 2026 00:46:33 +0000 (0:00:02.495) 0:00:27.997 ******** 2026-01-09 00:50:48.947586 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.947592 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.947598 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.947605 | orchestrator | 2026-01-09 00:50:48.947611 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-09 00:50:48.947618 | orchestrator | Friday 09 January 2026 00:46:36 +0000 (0:00:02.649) 0:00:30.647 ******** 2026-01-09 00:50:48.947624 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.947631 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.947637 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.947644 | orchestrator | 2026-01-09 00:50:48.947650 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-09 00:50:48.947656 | orchestrator | Friday 09 January 2026 00:46:37 +0000 (0:00:01.354) 0:00:32.001 ******** 2026-01-09 00:50:48.947663 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.947669 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.947676 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.947682 | orchestrator | 2026-01-09 00:50:48.947688 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] 
**************************** 2026-01-09 00:50:48.947695 | orchestrator | Friday 09 January 2026 00:46:38 +0000 (0:00:01.120) 0:00:33.122 ******** 2026-01-09 00:50:48.947701 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:50:48.947707 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:50:48.947712 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:50:48.947716 | orchestrator | 2026-01-09 00:50:48.947720 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-09 00:50:48.947724 | orchestrator | Friday 09 January 2026 00:46:39 +0000 (0:00:01.270) 0:00:34.392 ******** 2026-01-09 00:50:48.947728 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:50:48.947731 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:50:48.947735 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:50:48.947739 | orchestrator | 2026-01-09 00:50:48.947743 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-09 00:50:48.947747 | orchestrator | Friday 09 January 2026 00:46:40 +0000 (0:00:00.655) 0:00:35.048 ******** 2026-01-09 00:50:48.947751 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.947755 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.947758 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.947762 | orchestrator | 2026-01-09 00:50:48.947766 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-09 00:50:48.947770 | orchestrator | Friday 09 January 2026 00:46:42 +0000 (0:00:02.008) 0:00:37.057 ******** 2026-01-09 00:50:48.947774 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:50:48.947778 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:50:48.947782 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:50:48.947786 | orchestrator | 2026-01-09 00:50:48.947789 | orchestrator | TASK [k3s_server : Deploy vip manifest] 
****************************************
2026-01-09 00:50:48.947793 | orchestrator | Friday 09 January 2026 00:46:44 +0000 (0:00:01.541) 0:00:38.598 ********
2026-01-09 00:50:48.947797 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:50:48.947805 | orchestrator |
2026-01-09 00:50:48.947809 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-09 00:50:48.947813 | orchestrator | Friday 09 January 2026 00:46:44 +0000 (0:00:00.827) 0:00:39.426 ********
2026-01-09 00:50:48.947817 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.947821 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.947824 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.947828 | orchestrator |
2026-01-09 00:50:48.947832 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-09 00:50:48.947836 | orchestrator | Friday 09 January 2026 00:46:47 +0000 (0:00:02.496) 0:00:41.922 ********
2026-01-09 00:50:48.947840 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.947844 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.947850 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.947860 | orchestrator |
2026-01-09 00:50:48.947867 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-09 00:50:48.947873 | orchestrator | Friday 09 January 2026 00:46:48 +0000 (0:00:01.377) 0:00:43.299 ********
2026-01-09 00:50:48.947880 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.947886 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.947893 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.947899 | orchestrator |
2026-01-09 00:50:48.947904 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-09 00:50:48.947907 | orchestrator | Friday 09 January 2026 00:46:50 +0000 (0:00:01.155) 0:00:44.454 ********
2026-01-09 00:50:48.947911 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.947915 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.947919 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.947923 | orchestrator |
2026-01-09 00:50:48.947927 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-09 00:50:48.947934 | orchestrator | Friday 09 January 2026 00:46:51 +0000 (0:00:01.937) 0:00:46.392 ********
2026-01-09 00:50:48.947938 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.947942 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.947946 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.947950 | orchestrator |
2026-01-09 00:50:48.947954 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-09 00:50:48.947960 | orchestrator | Friday 09 January 2026 00:46:52 +0000 (0:00:00.689) 0:00:47.081 ********
2026-01-09 00:50:48.947967 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.947973 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.947979 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.947986 | orchestrator |
2026-01-09 00:50:48.947991 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-09 00:50:48.947994 | orchestrator | Friday 09 January 2026 00:46:53 +0000 (0:00:00.487) 0:00:47.569 ********
2026-01-09 00:50:48.947998 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948002 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948006 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948009 | orchestrator |
2026-01-09 00:50:48.948013 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-09 00:50:48.948020 | orchestrator | Friday 09 January 2026 00:46:55 +0000 (0:00:02.280) 0:00:49.849 ********
2026-01-09 00:50:48.948024 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948028 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948032 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948035 | orchestrator |
2026-01-09 00:50:48.948039 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-09 00:50:48.948043 | orchestrator | Friday 09 January 2026 00:46:58 +0000 (0:00:02.950) 0:00:52.800 ********
2026-01-09 00:50:48.948047 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948051 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948055 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948062 | orchestrator |
2026-01-09 00:50:48.948066 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-09 00:50:48.948070 | orchestrator | Friday 09 January 2026 00:46:59 +0000 (0:00:00.959) 0:00:53.760 ********
2026-01-09 00:50:48.948074 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-09 00:50:48.948078 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-09 00:50:48.948082 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-09 00:50:48.948086 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-09 00:50:48.948090 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-09 00:50:48.948093 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-09 00:50:48.948097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-09 00:50:48.948101 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-09 00:50:48.948105 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-09 00:50:48.948109 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-09 00:50:48.948112 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-09 00:50:48.948116 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-09 00:50:48.948120 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948124 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948128 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948132 | orchestrator |
2026-01-09 00:50:48.948136 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-09 00:50:48.948139 | orchestrator | Friday 09 January 2026 00:47:42 +0000 (0:00:43.275) 0:01:37.036 ********
2026-01-09 00:50:48.948143 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.948147 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.948151 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.948155 | orchestrator |
2026-01-09 00:50:48.948159 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-09 00:50:48.948162 | orchestrator | Friday 09 January 2026 00:47:43 +0000 (0:00:00.743) 0:01:37.780 ********
2026-01-09 00:50:48.948166 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948170 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948174 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948178 | orchestrator |
2026-01-09 00:50:48.948182 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-09 00:50:48.948185 | orchestrator | Friday 09 January 2026 00:47:44 +0000 (0:00:01.150) 0:01:38.931 ********
2026-01-09 00:50:48.948189 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948193 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948197 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948201 | orchestrator |
2026-01-09 00:50:48.948207 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-09 00:50:48.948211 | orchestrator | Friday 09 January 2026 00:47:45 +0000 (0:00:01.477) 0:01:40.408 ********
2026-01-09 00:50:48.948217 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948221 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948224 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948228 | orchestrator |
2026-01-09 00:50:48.948232 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-09 00:50:48.948236 | orchestrator | Friday 09 January 2026 00:48:10 +0000 (0:00:24.731) 0:02:05.140 ********
2026-01-09 00:50:48.948255 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948259 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948263 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948267 | orchestrator |
2026-01-09 00:50:48.948271 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-09 00:50:48.948275 | orchestrator | Friday 09 January 2026 00:48:11 +0000 (0:00:00.948) 0:02:06.089 ********
2026-01-09 00:50:48.948278 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948282 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948286 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948290 | orchestrator |
2026-01-09 00:50:48.948296 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-09 00:50:48.948300 | orchestrator | Friday 09 January 2026 00:48:12 +0000 (0:00:00.811) 0:02:06.900 ********
2026-01-09 00:50:48.948304 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948308 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948311 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948315 | orchestrator |
2026-01-09 00:50:48.948319 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-09 00:50:48.948323 | orchestrator | Friday 09 January 2026 00:48:13 +0000 (0:00:00.765) 0:02:07.665 ********
2026-01-09 00:50:48.948327 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948330 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948334 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948338 | orchestrator |
2026-01-09 00:50:48.948342 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-09 00:50:48.948346 | orchestrator | Friday 09 January 2026 00:48:14 +0000 (0:00:00.983) 0:02:08.649 ********
2026-01-09 00:50:48.948349 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948353 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948357 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948361 | orchestrator |
2026-01-09 00:50:48.948365 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-09 00:50:48.948369 | orchestrator | Friday 09 January 2026 00:48:14 +0000 (0:00:00.276) 0:02:08.925 ********
2026-01-09 00:50:48.948372 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948376 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948380 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948384 | orchestrator |
2026-01-09 00:50:48.948388 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-09 00:50:48.948391 | orchestrator | Friday 09 January 2026 00:48:15 +0000 (0:00:00.574) 0:02:09.500 ********
2026-01-09 00:50:48.948397 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948403 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948409 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948415 | orchestrator |
2026-01-09 00:50:48.948421 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-09 00:50:48.948427 | orchestrator | Friday 09 January 2026 00:48:15 +0000 (0:00:00.654) 0:02:10.154 ********
2026-01-09 00:50:48.948434 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948440 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948447 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948451 | orchestrator |
2026-01-09 00:50:48.948454 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-09 00:50:48.948458 | orchestrator | Friday 09 January 2026 00:48:16 +0000 (0:00:00.905) 0:02:11.060 ********
2026-01-09 00:50:48.948462 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:50:48.948470 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:50:48.948473 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:50:48.948477 | orchestrator |
2026-01-09 00:50:48.948481 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-09 00:50:48.948485 | orchestrator | Friday 09 January 2026 00:48:17 +0000 (0:00:00.727) 0:02:11.788 ********
2026-01-09 00:50:48.948488 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.948492 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.948496 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.948500 | orchestrator |
2026-01-09 00:50:48.948504 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-09 00:50:48.948507 | orchestrator | Friday 09 January 2026 00:48:17 +0000 (0:00:00.280) 0:02:12.068 ********
2026-01-09 00:50:48.948511 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.948515 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.948519 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.948522 | orchestrator |
2026-01-09 00:50:48.948526 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-09 00:50:48.948530 | orchestrator | Friday 09 January 2026 00:48:17 +0000 (0:00:00.295) 0:02:12.364 ********
2026-01-09 00:50:48.948534 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948538 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948542 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948546 | orchestrator |
2026-01-09 00:50:48.948550 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-09 00:50:48.948553 | orchestrator | Friday 09 January 2026 00:48:18 +0000 (0:00:00.867) 0:02:13.232 ********
2026-01-09 00:50:48.948557 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.948561 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.948565 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.948569 | orchestrator |
2026-01-09 00:50:48.948572 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-09 00:50:48.948576 | orchestrator | Friday 09 January 2026 00:48:19 +0000 (0:00:00.628) 0:02:13.861 ********
2026-01-09 00:50:48.948580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-09 00:50:48.948588 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-09 00:50:48.948595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-09 00:50:48.948605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-09 00:50:48.948611 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-09 00:50:48.948617 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-09 00:50:48.948623 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-09 00:50:48.948629 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-09 00:50:48.948636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-09 00:50:48.948645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-09 00:50:48.948652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-09 00:50:48.948658 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-09 00:50:48.948664 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-09 00:50:48.948671 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-09 00:50:48.948677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-09 00:50:48.948695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-09 00:50:48.948700 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-09 00:50:48.948704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-09 00:50:48.948708 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-09 00:50:48.948712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-09 00:50:48.948716 | orchestrator |
2026-01-09 00:50:48.948719 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-09 00:50:48.948723 | orchestrator |
2026-01-09 00:50:48.948727 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-09 00:50:48.948731 | orchestrator | Friday 09 January 2026 00:48:22 +0000 (0:00:02.964) 0:02:16.826 ********
2026-01-09 00:50:48.948735 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:50:48.948739 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:50:48.948742 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:50:48.948746 | orchestrator |
2026-01-09 00:50:48.948750 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-09 00:50:48.948754 | orchestrator | Friday 09 January 2026 00:48:22 +0000 (0:00:00.542) 0:02:17.369 ********
2026-01-09 00:50:48.948758 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:50:48.948762 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:50:48.948765 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:50:48.948770 | orchestrator |
2026-01-09 00:50:48.948774 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-09 00:50:48.948777 | orchestrator | Friday 09 January 2026 00:48:23 +0000 (0:00:00.373) 0:02:18.076 ********
2026-01-09 00:50:48.948781 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:50:48.948785 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:50:48.948789 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:50:48.948793 | orchestrator |
2026-01-09 00:50:48.948796 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-09 00:50:48.948800 | orchestrator | Friday 09 January 2026 00:48:24 +0000 (0:00:00.699) 0:02:18.450 ********
2026-01-09 00:50:48.948804 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:50:48.948808 | orchestrator |
2026-01-09 00:50:48.948812 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-09 00:50:48.948816 | orchestrator | Friday 09 January 2026 00:48:24 +0000 (0:00:00.699) 0:02:19.149 ********
2026-01-09 00:50:48.948820 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:50:48.948823 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:50:48.948827 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:50:48.948831 | orchestrator |
2026-01-09 00:50:48.948835 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-09 00:50:48.948839 | orchestrator | Friday 09 January 2026 00:48:25 +0000 (0:00:00.322) 0:02:19.472 ********
2026-01-09 00:50:48.948842 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:50:48.948849 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:50:48.948858 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:50:48.948866 | orchestrator |
2026-01-09 00:50:48.948871 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-09 00:50:48.948878 | orchestrator | Friday 09 January 2026 00:48:25 +0000 (0:00:00.356) 0:02:19.829 ********
2026-01-09 00:50:48.948884 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:50:48.948890 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:50:48.948896 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:50:48.948902 | orchestrator |
2026-01-09 00:50:48.948909 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-09 00:50:48.948915 | orchestrator | Friday 09 January 2026 00:48:25 +0000 (0:00:00.324) 0:02:20.154 ********
2026-01-09 00:50:48.948926 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:50:48.948930 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:50:48.948934 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:50:48.948938 | orchestrator |
2026-01-09 00:50:48.948946 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-09 00:50:48.948950 | orchestrator | Friday 09 January 2026 00:48:26 +0000 (0:00:00.915) 0:02:21.069 ********
2026-01-09 00:50:48.948953 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:50:48.948957 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:50:48.948961 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:50:48.948965 | orchestrator |
2026-01-09 00:50:48.948969 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-09 00:50:48.948973 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:01.240) 0:02:22.310 ********
2026-01-09 00:50:48.948976 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:50:48.948980 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:50:48.948984 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:50:48.948988 | orchestrator |
2026-01-09 00:50:48.948991 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-09 00:50:48.948995 | orchestrator | Friday 09 January 2026 00:48:29 +0000 (0:00:01.566) 0:02:23.876 ********
2026-01-09 00:50:48.948999 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:50:48.949003 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:50:48.949007 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:50:48.949010 | orchestrator |
2026-01-09 00:50:48.949014 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-09 00:50:48.949018 | orchestrator |
2026-01-09 00:50:48.949022 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-09 00:50:48.949026 | orchestrator | Friday 09 January 2026 00:48:42 +0000 (0:00:13.382) 0:02:37.259 ********
2026-01-09 00:50:48.949029 | orchestrator | ok: [testbed-manager]
2026-01-09 00:50:48.949033 | orchestrator |
2026-01-09 00:50:48.949037 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-09 00:50:48.949041 | orchestrator | Friday 09 January 2026 00:48:43 +0000 (0:00:01.024) 0:02:38.283 ********
2026-01-09 00:50:48.949044 | orchestrator | changed: [testbed-manager]
2026-01-09 00:50:48.949048 | orchestrator |
2026-01-09 00:50:48.949052 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-09 00:50:48.949056 | orchestrator | Friday 09 January 2026 00:48:44 +0000 (0:00:00.545) 0:02:38.829 ********
2026-01-09 00:50:48.949060 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-09 00:50:48.949064 | orchestrator |
2026-01-09 00:50:48.949067 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-09 00:50:48.949071 | orchestrator | Friday 09 January 2026 00:48:44 +0000 (0:00:00.567) 0:02:39.396 ********
2026-01-09 00:50:48.949075 | orchestrator | changed: [testbed-manager]
2026-01-09 00:50:48.949079 | orchestrator |
2026-01-09 00:50:48.949082 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-09 00:50:48.949086 | orchestrator | Friday 09 January 2026 00:48:46 +0000 (0:00:01.106) 0:02:40.503 ********
2026-01-09 00:50:48.949090 | orchestrator | changed: [testbed-manager]
2026-01-09 00:50:48.949094 | orchestrator |
2026-01-09 00:50:48.949098 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-09 00:50:48.949102 | orchestrator | Friday 09 January 2026 00:48:46 +0000 (0:00:00.600) 0:02:41.104 ********
2026-01-09 00:50:48.949106 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-09 00:50:48.949109 | orchestrator |
2026-01-09 00:50:48.949113 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-09 00:50:48.949117 | orchestrator | Friday 09 January 2026 00:48:48 +0000 (0:00:01.919) 0:02:43.023 ********
2026-01-09 00:50:48.949121 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-09 00:50:48.949125 | orchestrator |
2026-01-09 00:50:48.949129 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-09 00:50:48.949135 | orchestrator | Friday 09 January 2026 00:48:49 +0000 (0:00:00.957) 0:02:43.981 ******** 2026-01-09 00:50:48.949139 | orchestrator | changed: [testbed-manager] 2026-01-09 00:50:48.949143 | orchestrator | 2026-01-09 00:50:48.949147 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-09 00:50:48.949151 | orchestrator | Friday 09 January 2026 00:48:50 +0000 (0:00:00.898) 0:02:44.879 ******** 2026-01-09 00:50:48.949154 | orchestrator | changed: [testbed-manager] 2026-01-09 00:50:48.949158 | orchestrator | 2026-01-09 00:50:48.949162 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-09 00:50:48.949166 | orchestrator | 2026-01-09 00:50:48.949170 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-09 00:50:48.949174 | orchestrator | Friday 09 January 2026 00:48:50 +0000 (0:00:00.450) 0:02:45.330 ******** 2026-01-09 00:50:48.949177 | orchestrator | ok: [testbed-manager] 2026-01-09 00:50:48.949181 | orchestrator | 2026-01-09 00:50:48.949185 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-09 00:50:48.949189 | orchestrator | Friday 09 January 2026 00:48:51 +0000 (0:00:00.186) 0:02:45.516 ******** 2026-01-09 00:50:48.949192 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-09 00:50:48.949196 | orchestrator | 2026-01-09 00:50:48.949200 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-09 00:50:48.949204 | orchestrator | Friday 09 January 2026 00:48:51 +0000 (0:00:00.254) 0:02:45.771 ******** 2026-01-09 00:50:48.949207 | orchestrator | ok: [testbed-manager] 2026-01-09 00:50:48.949211 | orchestrator | 2026-01-09 00:50:48.949215 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-01-09 00:50:48.949219 | orchestrator | Friday 09 January 2026 00:48:52 +0000 (0:00:01.048) 0:02:46.820 ******** 2026-01-09 00:50:48.949222 | orchestrator | ok: [testbed-manager] 2026-01-09 00:50:48.949226 | orchestrator | 2026-01-09 00:50:48.949230 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-09 00:50:48.949234 | orchestrator | Friday 09 January 2026 00:48:54 +0000 (0:00:01.925) 0:02:48.746 ******** 2026-01-09 00:50:48.949249 | orchestrator | changed: [testbed-manager] 2026-01-09 00:50:48.949253 | orchestrator | 2026-01-09 00:50:48.949257 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-09 00:50:48.949261 | orchestrator | Friday 09 January 2026 00:48:55 +0000 (0:00:00.787) 0:02:49.533 ******** 2026-01-09 00:50:48.949265 | orchestrator | ok: [testbed-manager] 2026-01-09 00:50:48.949269 | orchestrator | 2026-01-09 00:50:48.949575 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-09 00:50:48.949585 | orchestrator | Friday 09 January 2026 00:48:55 +0000 (0:00:00.579) 0:02:50.112 ******** 2026-01-09 00:50:48.949589 | orchestrator | changed: [testbed-manager] 2026-01-09 00:50:48.949593 | orchestrator | 2026-01-09 00:50:48.949597 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-09 00:50:48.949600 | orchestrator | Friday 09 January 2026 00:49:05 +0000 (0:00:09.449) 0:02:59.562 ******** 2026-01-09 00:50:48.949604 | orchestrator | changed: [testbed-manager] 2026-01-09 00:50:48.949608 | orchestrator | 2026-01-09 00:50:48.949612 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-09 00:50:48.949616 | orchestrator | Friday 09 January 2026 00:49:21 +0000 (0:00:16.191) 0:03:15.754 ******** 2026-01-09 00:50:48.949619 | orchestrator | ok: [testbed-manager] 2026-01-09 
00:50:48.949623 | orchestrator |
2026-01-09 00:50:48.949627 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-09 00:50:48.949631 | orchestrator |
2026-01-09 00:50:48.949634 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-09 00:50:48.949638 | orchestrator | Friday 09 January 2026 00:49:22 +0000 (0:00:00.826) 0:03:16.580 ********
2026-01-09 00:50:48.949642 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.949646 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.949650 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.949658 | orchestrator |
2026-01-09 00:50:48.949661 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-09 00:50:48.949667 | orchestrator | Friday 09 January 2026 00:49:22 +0000 (0:00:00.450) 0:03:17.030 ********
2026-01-09 00:50:48.949671 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949675 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.949678 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.949682 | orchestrator |
2026-01-09 00:50:48.949686 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-09 00:50:48.949690 | orchestrator | Friday 09 January 2026 00:49:23 +0000 (0:00:00.526) 0:03:17.557 ********
2026-01-09 00:50:48.949696 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:50:48.949702 | orchestrator |
2026-01-09 00:50:48.949708 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-09 00:50:48.949716 | orchestrator | Friday 09 January 2026 00:49:24 +0000 (0:00:00.956) 0:03:18.513 ********
2026-01-09 00:50:48.949725 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.949731 | orchestrator |
2026-01-09 00:50:48.949737 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-09 00:50:48.949743 | orchestrator | Friday 09 January 2026 00:49:25 +0000 (0:00:00.984) 0:03:19.497 ********
2026-01-09 00:50:48.949749 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.949755 | orchestrator |
2026-01-09 00:50:48.949761 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-09 00:50:48.949766 | orchestrator | Friday 09 January 2026 00:49:25 +0000 (0:00:00.933) 0:03:20.431 ********
2026-01-09 00:50:48.949773 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949779 | orchestrator |
2026-01-09 00:50:48.949785 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-09 00:50:48.949792 | orchestrator | Friday 09 January 2026 00:49:26 +0000 (0:00:01.065) 0:03:20.555 ********
2026-01-09 00:50:48.949796 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.949800 | orchestrator |
2026-01-09 00:50:48.949804 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-09 00:50:48.949808 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:01.065) 0:03:21.620 ********
2026-01-09 00:50:48.949811 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949815 | orchestrator |
2026-01-09 00:50:48.949819 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-09 00:50:48.949823 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:00.130) 0:03:21.755 ********
2026-01-09 00:50:48.949827 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949831 | orchestrator |
2026-01-09 00:50:48.949834 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-09 00:50:48.949838 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:00.130) 0:03:21.885 ********
2026-01-09 00:50:48.949842 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949846 | orchestrator |
2026-01-09 00:50:48.949850 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-09 00:50:48.949853 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:00.134) 0:03:22.020 ********
2026-01-09 00:50:48.949857 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.949861 | orchestrator |
2026-01-09 00:50:48.949865 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-09 00:50:48.949869 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:00.139) 0:03:22.159 ********
2026-01-09 00:50:48.949872 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.949877 | orchestrator |
2026-01-09 00:50:48.949880 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-09 00:50:48.949884 | orchestrator | Friday 09 January 2026 00:49:33 +0000 (0:00:05.737) 0:03:27.896 ********
2026-01-09 00:50:48.949888 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-09 00:50:48.949896 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
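The "Wait for Cilium resources" task above retries until each Cilium workload reports ready, giving up after 30 attempts. The underlying pattern is a bounded poll loop; a minimal shell sketch of that pattern, with the resource names taken from the log (`check_ready` is a hypothetical stub standing in for the real readiness probe, e.g. `kubectl rollout status`, so the sketch runs without a cluster):

```shell
# Stub for the readiness probe; in reality this would be something like
# `kubectl rollout status "$1" -n kube-system --timeout=10s`.
# Here it pretends cilium-operator needs one extra poll before it is ready.
check_ready() {
  [ "$1" != "deployment/cilium-operator" ] || [ "${POLLED:-0}" -ge 1 ]
}

# Poll one resource until ready, bounded at 30 retries like the Ansible task.
wait_for() {
  resource=$1
  retries=30
  while [ "$retries" -gt 0 ]; do
    if check_ready "$resource"; then
      echo "ok: $resource"
      return 0
    fi
    echo "FAILED - RETRYING: $resource ($retries retries left)."
    POLLED=$(( ${POLLED:-0} + 1 ))
    retries=$(( retries - 1 ))
    sleep 1
  done
  echo "fatal: $resource never became ready" >&2
  return 1
}

for r in deployment/cilium-operator daemonset/cilium \
         deployment/hubble-relay deployment/hubble-ui; do
  wait_for "$r" || exit 1
done
```

In the actual role this loop is expressed with Ansible's `until`/`retries`/`delay` keywords on the kubectl call; the stub only mimics the 30-retry bound and the RETRYING message visible in the log.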
2026-01-09 00:50:48.949900 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-09 00:50:48.949904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-09 00:50:48.949908 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-09 00:50:48.949911 | orchestrator |
2026-01-09 00:50:48.949916 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-09 00:50:48.949919 | orchestrator | Friday 09 January 2026 00:50:16 +0000 (0:00:42.733) 0:04:10.629 ********
2026-01-09 00:50:48.950064 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.950072 | orchestrator |
2026-01-09 00:50:48.950075 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-09 00:50:48.950079 | orchestrator | Friday 09 January 2026 00:50:17 +0000 (0:00:01.369) 0:04:11.999 ********
2026-01-09 00:50:48.950083 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.950087 | orchestrator |
2026-01-09 00:50:48.950091 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-09 00:50:48.950095 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:01.697) 0:04:13.697 ********
2026-01-09 00:50:48.950099 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-09 00:50:48.950103 | orchestrator |
2026-01-09 00:50:48.950107 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-09 00:50:48.950111 | orchestrator | Friday 09 January 2026 00:50:20 +0000 (0:00:01.410) 0:04:15.108 ********
2026-01-09 00:50:48.950115 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.950119 | orchestrator |
2026-01-09 00:50:48.950123 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-09 00:50:48.950126 | orchestrator | Friday 09 January 2026 00:50:20 +0000 (0:00:00.122) 0:04:15.231 ********
2026-01-09 00:50:48.950130 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-09 00:50:48.950134 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-09 00:50:48.950138 | orchestrator |
2026-01-09 00:50:48.950145 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-09 00:50:48.950149 | orchestrator | Friday 09 January 2026 00:50:22 +0000 (0:00:02.193) 0:04:17.425 ********
2026-01-09 00:50:48.950153 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.950156 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.950160 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.950164 | orchestrator |
2026-01-09 00:50:48.950168 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-09 00:50:48.950172 | orchestrator | Friday 09 January 2026 00:50:23 +0000 (0:00:00.414) 0:04:17.839 ********
2026-01-09 00:50:48.950176 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.950180 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.950183 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.950187 | orchestrator |
2026-01-09 00:50:48.950191 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-09 00:50:48.950195 | orchestrator |
2026-01-09 00:50:48.950199 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-09 00:50:48.950203 | orchestrator | Friday 09 January 2026 00:50:24 +0000 (0:00:00.151) 0:04:19.150 ********
2026-01-09 00:50:48.950207 | orchestrator | ok: [testbed-manager]
2026-01-09 00:50:48.950210 | orchestrator |
2026-01-09 00:50:48.950214 | orchestrator | TASK [k9s : Include distribution specific install tasks]
***********************
2026-01-09 00:50:48.950218 | orchestrator | Friday 09 January 2026 00:50:24 +0000 (0:00:00.310) 0:04:19.302 ********
2026-01-09 00:50:48.950222 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-09 00:50:48.950226 | orchestrator |
2026-01-09 00:50:48.950230 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-09 00:50:48.950268 | orchestrator | Friday 09 January 2026 00:50:25 +0000 (0:00:00.310) 0:04:19.612 ********
2026-01-09 00:50:48.950277 | orchestrator | changed: [testbed-manager]
2026-01-09 00:50:48.950283 | orchestrator |
2026-01-09 00:50:48.950289 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-09 00:50:48.950295 | orchestrator |
2026-01-09 00:50:48.950301 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-09 00:50:48.950308 | orchestrator | Friday 09 January 2026 00:50:32 +0000 (0:00:07.097) 0:04:26.710 ********
2026-01-09 00:50:48.950314 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:50:48.950320 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:50:48.950324 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:50:48.950328 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:50:48.950331 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:50:48.950335 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:50:48.950339 | orchestrator |
2026-01-09 00:50:48.950343 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-09 00:50:48.950347 | orchestrator | Friday 09 January 2026 00:50:33 +0000 (0:00:00.999) 0:04:27.709 ********
2026-01-09 00:50:48.950351 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-09 00:50:48.950354 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-09 00:50:48.950358 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-09 00:50:48.950362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-09 00:50:48.950366 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-09 00:50:48.950370 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-09 00:50:48.950373 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-09 00:50:48.950377 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-09 00:50:48.950381 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-09 00:50:48.950385 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-09 00:50:48.950389 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-09 00:50:48.950393 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-09 00:50:48.950401 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-09 00:50:48.950405 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-09 00:50:48.950408 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-09 00:50:48.950413 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-09 00:50:48.950417 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-09 00:50:48.950420 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-09 00:50:48.950424 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-09 00:50:48.950428 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-09 00:50:48.950432 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-09 00:50:48.950436 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-09 00:50:48.950440 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-09 00:50:48.950443 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-09 00:50:48.950453 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-09 00:50:48.950457 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-09 00:50:48.950460 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-09 00:50:48.950464 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-09 00:50:48.950468 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-09 00:50:48.950472 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-09 00:50:48.950476 | orchestrator |
2026-01-09 00:50:48.950479 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-09 00:50:48.950483 | orchestrator | Friday 09 January 2026 00:50:46 +0000 (0:00:13.048) 0:04:40.758 ********
2026-01-09 00:50:48.950487 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:50:48.950491 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:50:48.950495 |
orchestrator | skipping: [testbed-node-5]
2026-01-09 00:50:48.950498 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.950502 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.950506 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.950510 | orchestrator |
2026-01-09 00:50:48.950514 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-09 00:50:48.950517 | orchestrator | Friday 09 January 2026 00:50:47 +0000 (0:00:00.774) 0:04:41.533 ********
2026-01-09 00:50:48.950521 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:50:48.950525 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:50:48.950529 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:50:48.950533 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:50:48.950537 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:50:48.950540 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:50:48.950544 | orchestrator |
2026-01-09 00:50:48.950548 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:50:48.950552 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:50:48.950557 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-09 00:50:48.950561 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-09 00:50:48.950565 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-09 00:50:48.950569 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-09 00:50:48.950573 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-09 00:50:48.950576 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-09 00:50:48.950580 | orchestrator |
2026-01-09 00:50:48.950584 | orchestrator |
2026-01-09 00:50:48.950588 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:50:48.950592 | orchestrator | Friday 09 January 2026 00:50:47 +0000 (0:00:00.621) 0:04:42.155 ********
2026-01-09 00:50:48.950595 | orchestrator | ===============================================================================
2026-01-09 00:50:48.950599 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.28s
2026-01-09 00:50:48.950603 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.73s
2026-01-09 00:50:48.950610 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.73s
2026-01-09 00:50:48.950616 | orchestrator | kubectl : Install required packages ------------------------------------ 16.19s
2026-01-09 00:50:48.950620 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.38s
2026-01-09 00:50:48.950623 | orchestrator | Manage labels ---------------------------------------------------------- 13.05s
2026-01-09 00:50:48.950627 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.45s
2026-01-09 00:50:48.950631 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.10s
2026-01-09 00:50:48.950634 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.74s
2026-01-09 00:50:48.950638 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.28s
2026-01-09 00:50:48.950642 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.96s
2026-01-09 00:50:48.950646 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.95s
2026-01-09 00:50:48.950649 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.65s
2026-01-09 00:50:48.950653 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.50s
2026-01-09 00:50:48.950657 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.50s
2026-01-09 00:50:48.950662 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.42s
2026-01-09 00:50:48.950668 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.28s
2026-01-09 00:50:48.950673 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.23s
2026-01-09 00:50:48.950678 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.19s
2026-01-09 00:50:48.950683 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.18s
2026-01-09 00:50:48.950687 | orchestrator | 2026-01-09 00:50:48 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:48.950692 | orchestrator | 2026-01-09 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:51.981438 | orchestrator | 2026-01-09 00:50:51 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:51.982498 | orchestrator | 2026-01-09 00:50:51 | INFO  | Task f72489ab-0d39-419e-a367-94dd3aaaf566 is in state STARTED
2026-01-09 00:50:51.983330 | orchestrator | 2026-01-09 00:50:51 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:51.984359 | orchestrator | 2026-01-09 00:50:51 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:51.985563 | orchestrator | 2026-01-09 00:50:51 | INFO
 | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:51.986544 | orchestrator | 2026-01-09 00:50:51 | INFO  | Task 6092ebfb-f850-43e1-8bc4-eaa08639e4fc is in state STARTED
2026-01-09 00:50:51.988803 | orchestrator | 2026-01-09 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:55.034367 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:55.034467 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task f72489ab-0d39-419e-a367-94dd3aaaf566 is in state STARTED
2026-01-09 00:50:55.034481 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:55.034504 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:55.043087 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:55.043207 | orchestrator | 2026-01-09 00:50:55 | INFO  | Task 6092ebfb-f850-43e1-8bc4-eaa08639e4fc is in state STARTED
2026-01-09 00:50:55.043217 | orchestrator | 2026-01-09 00:50:55 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:50:58.130965 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:50:58.131482 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task f72489ab-0d39-419e-a367-94dd3aaaf566 is in state STARTED
2026-01-09 00:50:58.132320 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:50:58.133152 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:50:58.133966 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:50:58.134636 | orchestrator | 2026-01-09 00:50:58 | INFO  | Task 6092ebfb-f850-43e1-8bc4-eaa08639e4fc is in state SUCCESS
2026-01-09 00:50:58.134705 | orchestrator | 2026-01-09 00:50:58 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:01.167894 | orchestrator | 2026-01-09 00:51:01 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:01.167978 | orchestrator | 2026-01-09 00:51:01 | INFO  | Task f72489ab-0d39-419e-a367-94dd3aaaf566 is in state SUCCESS
2026-01-09 00:51:01.168915 | orchestrator | 2026-01-09 00:51:01 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:01.169668 | orchestrator | 2026-01-09 00:51:01 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:01.170446 | orchestrator | 2026-01-09 00:51:01 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:01.170486 | orchestrator | 2026-01-09 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:04.196623 | orchestrator | 2026-01-09 00:51:04 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:04.197338 | orchestrator | 2026-01-09 00:51:04 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:04.198625 | orchestrator | 2026-01-09 00:51:04 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:04.199842 | orchestrator | 2026-01-09 00:51:04 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:04.199868 | orchestrator | 2026-01-09 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:07.234243 | orchestrator | 2026-01-09 00:51:07 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:07.234473 | orchestrator | 2026-01-09 00:51:07 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:07.235508 | orchestrator | 2026-01-09 00:51:07 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:07.235981 | orchestrator | 2026-01-09 00:51:07 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:07.236017 | orchestrator | 2026-01-09 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:10.298694 | orchestrator | 2026-01-09 00:51:10 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:10.299726 | orchestrator | 2026-01-09 00:51:10 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:10.301309 | orchestrator | 2026-01-09 00:51:10 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:10.302742 | orchestrator | 2026-01-09 00:51:10 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:10.302814 | orchestrator | 2026-01-09 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:13.345100 | orchestrator | 2026-01-09 00:51:13 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:13.347596 | orchestrator | 2026-01-09 00:51:13 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:13.349032 | orchestrator | 2026-01-09 00:51:13 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:13.351190 | orchestrator | 2026-01-09 00:51:13 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:13.351302 | orchestrator | 2026-01-09 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:16.393358 | orchestrator | 2026-01-09 00:51:16 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:16.396197 | orchestrator | 2026-01-09 00:51:16 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:16.398341 | orchestrator | 2026-01-09 00:51:16 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:16.400556 | orchestrator | 2026-01-09 00:51:16 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:16.400610 | orchestrator | 2026-01-09 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:19.435240 | orchestrator | 2026-01-09 00:51:19 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:19.435725 | orchestrator | 2026-01-09 00:51:19 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:19.436909 | orchestrator | 2026-01-09 00:51:19 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:19.437826 | orchestrator | 2026-01-09 00:51:19 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:19.437860 | orchestrator | 2026-01-09 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:22.491092 | orchestrator | 2026-01-09 00:51:22 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:22.492905 | orchestrator | 2026-01-09 00:51:22 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:22.495395 | orchestrator | 2026-01-09 00:51:22 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:22.498429 | orchestrator | 2026-01-09 00:51:22 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state STARTED
2026-01-09 00:51:22.498665 | orchestrator | 2026-01-09 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:51:25.535991 | orchestrator | 2026-01-09 00:51:25 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:51:25.537618 | orchestrator | 2026-01-09 00:51:25 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED
2026-01-09 00:51:25.538536 | orchestrator | 2026-01-09 00:51:25 | INFO  | Task
8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:51:25.539599 | orchestrator | 2026-01-09 00:51:25 | INFO  | Task 6cbe074a-2f7b-4578-b075-f3a5b428a53a is in state SUCCESS
2026-01-09 00:51:25.540040 | orchestrator |
2026-01-09 00:51:25.540067 | orchestrator |
2026-01-09 00:51:25.540091 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-09 00:51:25.540097 | orchestrator |
2026-01-09 00:51:25.540109 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-09 00:51:25.540116 | orchestrator | Friday 09 January 2026 00:50:53 +0000 (0:00:00.161) 0:00:00.161 ********
2026-01-09 00:51:25.540158 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-09 00:51:25.540166 | orchestrator |
2026-01-09 00:51:25.540172 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-09 00:51:25.540178 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:00.830) 0:00:00.992 ********
2026-01-09 00:51:25.540184 | orchestrator | changed: [testbed-manager]
2026-01-09 00:51:25.540205 | orchestrator |
2026-01-09 00:51:25.540211 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-09 00:51:25.540217 | orchestrator | Friday 09 January 2026 00:50:56 +0000 (0:00:01.515) 0:00:02.507 ********
2026-01-09 00:51:25.540223 | orchestrator | changed: [testbed-manager]
2026-01-09 00:51:25.540228 | orchestrator |
2026-01-09 00:51:25.540234 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:51:25.540240 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:51:25.540248 | orchestrator |
2026-01-09 00:51:25.540254 | orchestrator |
2026-01-09 00:51:25.540260 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:51:25.540265 | orchestrator | Friday 09 January 2026 00:50:56 +0000 (0:00:00.417) 0:00:02.925 ********
2026-01-09 00:51:25.540271 | orchestrator | ===============================================================================
2026-01-09 00:51:25.540276 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.52s
2026-01-09 00:51:25.540282 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s
2026-01-09 00:51:25.540289 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s
2026-01-09 00:51:25.540294 | orchestrator |
2026-01-09 00:51:25.540300 | orchestrator |
2026-01-09 00:51:25.540306 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-09 00:51:25.540312 | orchestrator |
2026-01-09 00:51:25.540318 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-09 00:51:25.540324 | orchestrator | Friday 09 January 2026 00:50:53 +0000 (0:00:00.199) 0:00:00.199 ********
2026-01-09 00:51:25.540330 | orchestrator | ok: [testbed-manager]
2026-01-09 00:51:25.540338 | orchestrator |
2026-01-09 00:51:25.540381 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-09 00:51:25.540388 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:00.597) 0:00:00.797 ********
2026-01-09 00:51:25.540395 | orchestrator | ok: [testbed-manager]
2026-01-09 00:51:25.540401 | orchestrator |
2026-01-09 00:51:25.540407 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-09 00:51:25.540414 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:00.830) 0:00:01.628 ********
2026-01-09 00:51:25.540420 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-09 00:51:25.540427 | orchestrator |
2026-01-09 00:51:25.540435 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-09 00:51:25.540442 | orchestrator | Friday 09 January 2026 00:50:55 +0000 (0:00:00.822) 0:00:02.450 ********
2026-01-09 00:51:25.540447 | orchestrator | changed: [testbed-manager]
2026-01-09 00:51:25.540517 | orchestrator |
2026-01-09 00:51:25.540521 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-09 00:51:25.540526 | orchestrator | Friday 09 January 2026 00:50:57 +0000 (0:00:01.438) 0:00:03.888 ********
2026-01-09 00:51:25.540530 | orchestrator | changed: [testbed-manager]
2026-01-09 00:51:25.540534 | orchestrator |
2026-01-09 00:51:25.540538 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-09 00:51:25.540542 | orchestrator | Friday 09 January 2026 00:50:57 +0000 (0:00:00.490) 0:00:04.379 ********
2026-01-09 00:51:25.540546 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-09 00:51:25.540550 | orchestrator |
2026-01-09 00:51:25.540554 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-09 00:51:25.540558 | orchestrator | Friday 09 January 2026 00:50:59 +0000 (0:00:01.443) 0:00:05.823 ********
2026-01-09 00:51:25.540573 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-09 00:51:25.540577 | orchestrator |
2026-01-09 00:51:25.540580 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-09 00:51:25.540584 | orchestrator | Friday 09 January 2026 00:50:59 +0000 (0:00:00.777) 0:00:06.600 ********
2026-01-09 00:51:25.540588 | orchestrator | ok: [testbed-manager]
2026-01-09 00:51:25.540592 | orchestrator |
2026-01-09 00:51:25.540596 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-09 00:51:25.540600 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.411) 0:00:07.012 ********
2026-01-09 00:51:25.540607 | orchestrator | ok: [testbed-manager]
2026-01-09 00:51:25.540613 | orchestrator |
2026-01-09 00:51:25.540619 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:51:25.540629 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 00:51:25.540635 | orchestrator |
2026-01-09 00:51:25.540643 | orchestrator |
2026-01-09 00:51:25.540649 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:51:25.540655 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.329) 0:00:07.341 ********
2026-01-09 00:51:25.540681 | orchestrator | ===============================================================================
2026-01-09 00:51:25.540688 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.44s
2026-01-09 00:51:25.540694 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.44s
2026-01-09 00:51:25.540701 | orchestrator | Create .kube directory -------------------------------------------------- 0.83s
2026-01-09 00:51:25.540729 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2026-01-09 00:51:25.540737 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s
2026-01-09 00:51:25.540744 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s
2026-01-09 00:51:25.540751 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s
2026-01-09 00:51:25.540758 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s
2026-01-09 00:51:25.540765 | orchestrator | Enable kubectl command line completion
---------------------------------- 0.33s 2026-01-09 00:51:25.540772 | orchestrator | 2026-01-09 00:51:25.540926 | orchestrator | 2026-01-09 00:51:25.540934 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-09 00:51:25.540938 | orchestrator | 2026-01-09 00:51:25.540942 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-09 00:51:25.540946 | orchestrator | Friday 09 January 2026 00:48:55 +0000 (0:00:00.077) 0:00:00.077 ******** 2026-01-09 00:51:25.540950 | orchestrator | ok: [localhost] => { 2026-01-09 00:51:25.540956 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-09 00:51:25.540961 | orchestrator | } 2026-01-09 00:51:25.540965 | orchestrator | 2026-01-09 00:51:25.540970 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-09 00:51:25.540974 | orchestrator | Friday 09 January 2026 00:48:56 +0000 (0:00:00.040) 0:00:00.118 ******** 2026-01-09 00:51:25.540979 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-09 00:51:25.540985 | orchestrator | ...ignoring 2026-01-09 00:51:25.540989 | orchestrator | 2026-01-09 00:51:25.540993 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-09 00:51:25.540997 | orchestrator | Friday 09 January 2026 00:48:58 +0000 (0:00:02.946) 0:00:03.065 ******** 2026-01-09 00:51:25.541001 | orchestrator | skipping: [localhost] 2026-01-09 00:51:25.541005 | orchestrator | 2026-01-09 00:51:25.541009 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-09 00:51:25.541020 | orchestrator | Friday 09 January 2026 00:48:59 +0000 (0:00:00.073) 0:00:03.139 ******** 2026-01-09 00:51:25.541024 | orchestrator | ok: [localhost] 2026-01-09 00:51:25.541028 | orchestrator | 2026-01-09 00:51:25.541032 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 00:51:25.541036 | orchestrator | 2026-01-09 00:51:25.541040 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:51:25.541044 | orchestrator | Friday 09 January 2026 00:48:59 +0000 (0:00:00.185) 0:00:03.324 ******** 2026-01-09 00:51:25.541048 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:51:25.541052 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:51:25.541056 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:51:25.541060 | orchestrator | 2026-01-09 00:51:25.541064 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:51:25.541068 | orchestrator | Friday 09 January 2026 00:48:59 +0000 (0:00:00.392) 0:00:03.716 ******** 2026-01-09 00:51:25.541072 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-09 00:51:25.541076 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-09 00:51:25.541080 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-09 00:51:25.541084 | orchestrator | 2026-01-09 00:51:25.541088 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-09 00:51:25.541092 | orchestrator | 2026-01-09 00:51:25.541096 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-09 00:51:25.541100 | orchestrator | Friday 09 January 2026 00:49:00 +0000 (0:00:00.817) 0:00:04.534 ******** 2026-01-09 00:51:25.541104 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:51:25.541108 | orchestrator | 2026-01-09 00:51:25.541112 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-09 00:51:25.541115 | orchestrator | Friday 09 January 2026 00:49:02 +0000 (0:00:02.278) 0:00:06.813 ******** 2026-01-09 00:51:25.541119 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:51:25.541123 | orchestrator | 2026-01-09 00:51:25.541127 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-09 00:51:25.541130 | orchestrator | Friday 09 January 2026 00:49:05 +0000 (0:00:02.447) 0:00:09.260 ******** 2026-01-09 00:51:25.541134 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541138 | orchestrator | 2026-01-09 00:51:25.541142 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-09 00:51:25.541146 | orchestrator | Friday 09 January 2026 00:49:05 +0000 (0:00:00.418) 0:00:09.678 ******** 2026-01-09 00:51:25.541150 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541154 | orchestrator | 2026-01-09 00:51:25.541158 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-09 00:51:25.541161 | 
orchestrator | Friday 09 January 2026 00:49:06 +0000 (0:00:00.566) 0:00:10.245 ******** 2026-01-09 00:51:25.541165 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541169 | orchestrator | 2026-01-09 00:51:25.541173 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-09 00:51:25.541177 | orchestrator | Friday 09 January 2026 00:49:06 +0000 (0:00:00.422) 0:00:10.667 ******** 2026-01-09 00:51:25.541181 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541185 | orchestrator | 2026-01-09 00:51:25.541246 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-09 00:51:25.541250 | orchestrator | Friday 09 January 2026 00:49:07 +0000 (0:00:00.611) 0:00:11.279 ******** 2026-01-09 00:51:25.541254 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:51:25.541258 | orchestrator | 2026-01-09 00:51:25.541262 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-09 00:51:25.541271 | orchestrator | Friday 09 January 2026 00:49:08 +0000 (0:00:01.066) 0:00:12.346 ******** 2026-01-09 00:51:25.541275 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:51:25.541284 | orchestrator | 2026-01-09 00:51:25.541288 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-09 00:51:25.541292 | orchestrator | Friday 09 January 2026 00:49:09 +0000 (0:00:00.781) 0:00:13.127 ******** 2026-01-09 00:51:25.541295 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541299 | orchestrator | 2026-01-09 00:51:25.541303 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-09 00:51:25.541309 | orchestrator | Friday 09 January 2026 00:49:09 +0000 (0:00:00.516) 0:00:13.643 ******** 2026-01-09 00:51:25.541315 | orchestrator | 
skipping: [testbed-node-0] 2026-01-09 00:51:25.541321 | orchestrator | 2026-01-09 00:51:25.541340 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-09 00:51:25.541346 | orchestrator | Friday 09 January 2026 00:49:09 +0000 (0:00:00.415) 0:00:14.059 ******** 2026-01-09 00:51:25.541358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541392 | orchestrator | 2026-01-09 00:51:25.541396 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-09 00:51:25.541400 | orchestrator | Friday 09 January 2026 00:49:11 +0000 (0:00:01.257) 0:00:15.317 ******** 2026-01-09 00:51:25.541414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541428 | orchestrator | 2026-01-09 00:51:25.541432 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-09 00:51:25.541436 | orchestrator | Friday 09 January 2026 00:49:14 +0000 (0:00:03.503) 0:00:18.820 ******** 2026-01-09 00:51:25.541440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-09 00:51:25.541444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-09 00:51:25.541448 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-09 00:51:25.541464 | orchestrator | 2026-01-09 00:51:25.541467 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-01-09 00:51:25.541471 | orchestrator | Friday 09 January 2026 00:49:16 +0000 (0:00:02.125) 0:00:20.946 ******** 2026-01-09 00:51:25.541475 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-09 00:51:25.541479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-09 00:51:25.541483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-09 00:51:25.541487 | orchestrator | 2026-01-09 00:51:25.541491 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-09 00:51:25.541495 | orchestrator | Friday 09 January 2026 00:49:19 +0000 (0:00:03.019) 0:00:23.965 ******** 2026-01-09 00:51:25.541502 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-09 00:51:25.541507 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-09 00:51:25.541513 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-09 00:51:25.541522 | orchestrator | 2026-01-09 00:51:25.541533 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-09 00:51:25.541539 | orchestrator | Friday 09 January 2026 00:49:21 +0000 (0:00:01.497) 0:00:25.463 ******** 2026-01-09 00:51:25.541549 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-09 00:51:25.541556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-09 00:51:25.541563 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-09 00:51:25.541569 | orchestrator | 2026-01-09 00:51:25.541576 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-01-09 00:51:25.541582 | orchestrator | Friday 09 January 2026 00:49:24 +0000 (0:00:02.909) 0:00:28.373 ******** 2026-01-09 00:51:25.541588 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-09 00:51:25.541594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-09 00:51:25.541600 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-09 00:51:25.541607 | orchestrator | 2026-01-09 00:51:25.541614 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-09 00:51:25.541621 | orchestrator | Friday 09 January 2026 00:49:26 +0000 (0:00:02.125) 0:00:30.498 ******** 2026-01-09 00:51:25.541627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-09 00:51:25.541634 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-09 00:51:25.541640 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-09 00:51:25.541645 | orchestrator | 2026-01-09 00:51:25.541649 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-09 00:51:25.541654 | orchestrator | Friday 09 January 2026 00:49:29 +0000 (0:00:02.588) 0:00:33.086 ******** 2026-01-09 00:51:25.541659 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541664 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:51:25.541668 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:51:25.541673 | orchestrator | 2026-01-09 00:51:25.541677 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-09 00:51:25.541682 | orchestrator | Friday 09 January 2026 00:49:31 
+0000 (0:00:02.021) 0:00:35.108 ******** 2026-01-09 00:51:25.541687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:51:25.541725 | orchestrator | 2026-01-09 00:51:25.541729 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-09 00:51:25.541734 | orchestrator | Friday 09 January 2026 00:49:33 +0000 (0:00:02.212) 0:00:37.321 ******** 2026-01-09 00:51:25.541739 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:51:25.541744 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:51:25.541748 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:51:25.541753 | orchestrator | 2026-01-09 00:51:25.541758 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-09 00:51:25.541764 | 
orchestrator | Friday 09 January 2026 00:49:34 +0000 (0:00:01.174) 0:00:38.495 ******** 2026-01-09 00:51:25.541772 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:51:25.541782 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:51:25.541787 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:51:25.541794 | orchestrator | 2026-01-09 00:51:25.541800 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-09 00:51:25.541812 | orchestrator | Friday 09 January 2026 00:49:42 +0000 (0:00:08.053) 0:00:46.549 ******** 2026-01-09 00:51:25.541819 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:51:25.541825 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:51:25.541831 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:51:25.541837 | orchestrator | 2026-01-09 00:51:25.541842 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-09 00:51:25.541849 | orchestrator | 2026-01-09 00:51:25.541854 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-09 00:51:25.541860 | orchestrator | Friday 09 January 2026 00:49:42 +0000 (0:00:00.321) 0:00:46.871 ******** 2026-01-09 00:51:25.541866 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:51:25.541873 | orchestrator | 2026-01-09 00:51:25.541879 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-09 00:51:25.541886 | orchestrator | Friday 09 January 2026 00:49:43 +0000 (0:00:00.576) 0:00:47.448 ******** 2026-01-09 00:51:25.541891 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:51:25.541897 | orchestrator | 2026-01-09 00:51:25.541903 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-09 00:51:25.541910 | orchestrator | Friday 09 January 2026 00:49:43 +0000 (0:00:00.264) 0:00:47.713 ******** 2026-01-09 00:51:25.541916 | orchestrator 
| changed: [testbed-node-0] 2026-01-09 00:51:25.541923 | orchestrator | 2026-01-09 00:51:25.541929 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-09 00:51:25.541936 | orchestrator | Friday 09 January 2026 00:49:46 +0000 (0:00:02.375) 0:00:50.089 ******** 2026-01-09 00:51:25.541942 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:51:25.541949 | orchestrator | 2026-01-09 00:51:25.541956 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-09 00:51:25.541961 | orchestrator | 2026-01-09 00:51:25.541965 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-09 00:51:25.541969 | orchestrator | Friday 09 January 2026 00:50:39 +0000 (0:00:53.251) 0:01:43.340 ******** 2026-01-09 00:51:25.541973 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:51:25.541977 | orchestrator | 2026-01-09 00:51:25.541981 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-09 00:51:25.541985 | orchestrator | Friday 09 January 2026 00:50:39 +0000 (0:00:00.528) 0:01:43.869 ******** 2026-01-09 00:51:25.541989 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:51:25.541993 | orchestrator | 2026-01-09 00:51:25.541997 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-09 00:51:25.542001 | orchestrator | Friday 09 January 2026 00:50:40 +0000 (0:00:00.281) 0:01:44.151 ******** 2026-01-09 00:51:25.542005 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:51:25.542009 | orchestrator | 2026-01-09 00:51:25.542089 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-09 00:51:25.542095 | orchestrator | Friday 09 January 2026 00:50:42 +0000 (0:00:02.224) 0:01:46.375 ******** 2026-01-09 00:51:25.542099 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:51:25.542104 
| orchestrator | 2026-01-09 00:51:25.542108 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-09 00:51:25.542111 | orchestrator | 2026-01-09 00:51:25.542115 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-09 00:51:25.542120 | orchestrator | Friday 09 January 2026 00:50:59 +0000 (0:00:17.516) 0:02:03.891 ******** 2026-01-09 00:51:25.542124 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:51:25.542128 | orchestrator | 2026-01-09 00:51:25.542136 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-09 00:51:25.542140 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.683) 0:02:04.575 ******** 2026-01-09 00:51:25.542144 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:51:25.542149 | orchestrator | 2026-01-09 00:51:25.542153 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-09 00:51:25.542156 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.421) 0:02:04.996 ******** 2026-01-09 00:51:25.542166 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:51:25.542170 | orchestrator | 2026-01-09 00:51:25.542174 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-09 00:51:25.542185 | orchestrator | Friday 09 January 2026 00:51:02 +0000 (0:00:01.945) 0:02:06.941 ******** 2026-01-09 00:51:25.542211 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:51:25.542216 | orchestrator | 2026-01-09 00:51:25.542220 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-09 00:51:25.542224 | orchestrator | 2026-01-09 00:51:25.542228 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-09 00:51:25.542232 | orchestrator | Friday 09 January 2026 00:51:20 +0000 (0:00:17.336) 
0:02:24.278 ******** 2026-01-09 00:51:25.542236 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:51:25.542240 | orchestrator | 2026-01-09 00:51:25.542244 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-09 00:51:25.542248 | orchestrator | Friday 09 January 2026 00:51:20 +0000 (0:00:00.551) 0:02:24.830 ******** 2026-01-09 00:51:25.542252 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-09 00:51:25.542256 | orchestrator | enable_outward_rabbitmq_True 2026-01-09 00:51:25.542260 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-09 00:51:25.542264 | orchestrator | outward_rabbitmq_restart 2026-01-09 00:51:25.542268 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:51:25.542272 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:51:25.542275 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:51:25.542280 | orchestrator | 2026-01-09 00:51:25.542284 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-09 00:51:25.542287 | orchestrator | skipping: no hosts matched 2026-01-09 00:51:25.542291 | orchestrator | 2026-01-09 00:51:25.542295 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-09 00:51:25.542299 | orchestrator | skipping: no hosts matched 2026-01-09 00:51:25.542303 | orchestrator | 2026-01-09 00:51:25.542307 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-09 00:51:25.542311 | orchestrator | skipping: no hosts matched 2026-01-09 00:51:25.542315 | orchestrator | 2026-01-09 00:51:25.542320 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:51:25.542325 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-09 
00:51:25.542332 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-09 00:51:25.542336 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:51:25.542340 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 00:51:25.542344 | orchestrator | 2026-01-09 00:51:25.542348 | orchestrator | 2026-01-09 00:51:25.542353 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:51:25.542357 | orchestrator | Friday 09 January 2026 00:51:23 +0000 (0:00:03.033) 0:02:27.863 ******** 2026-01-09 00:51:25.542360 | orchestrator | =============================================================================== 2026-01-09 00:51:25.542364 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.11s 2026-01-09 00:51:25.542368 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.05s 2026-01-09 00:51:25.542372 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.55s 2026-01-09 00:51:25.542376 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.50s 2026-01-09 00:51:25.542384 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.03s 2026-01-09 00:51:25.542388 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.02s 2026-01-09 00:51:25.542392 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.95s 2026-01-09 00:51:25.542396 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.91s 2026-01-09 00:51:25.542400 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.59s 2026-01-09 00:51:25.542403 | 
orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.45s 2026-01-09 00:51:25.542407 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.28s 2026-01-09 00:51:25.542411 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.21s 2026-01-09 00:51:25.542415 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.13s 2026-01-09 00:51:25.542419 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.13s 2026-01-09 00:51:25.542423 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.02s 2026-01-09 00:51:25.542426 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.79s 2026-01-09 00:51:25.542430 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2026-01-09 00:51:25.542438 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.26s 2026-01-09 00:51:25.542442 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.17s 2026-01-09 00:51:25.542445 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.07s 2026-01-09 00:51:25.542449 | orchestrator | 2026-01-09 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:51:28.589137 | orchestrator | 2026-01-09 00:51:28 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:51:28.591917 | orchestrator | 2026-01-09 00:51:28 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state STARTED 2026-01-09 00:51:28.592714 | orchestrator | 2026-01-09 00:51:28 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:51:28.592968 | orchestrator | 2026-01-09 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:51:31.657095 | 
orchestrator | 2026-01-09 00:51:31 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED [... identical STARTED / "Wait 1 second(s) until the next check" polling cycles for tasks fee975a5-1a56-4aa5-9f71-abb202d75750, ea5c8411-7d01-4d44-8c33-c5d5970f142f and 8e104a68-296b-4a8f-909b-9c80538d72d6 repeat every ~3 seconds through 00:52:14; repeats condensed ...] 2026-01-09 00:52:17.348796 | orchestrator | 2026-01-09 00:52:17 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:52:17.352779 | orchestrator | 
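The watcher producing the INFO lines above polls each task ID until it leaves the STARTED state, sleeping between rounds. A minimal sketch of that loop, using a hypothetical `get_task_state()` stand-in for the real osism/Celery result lookup (names and timeout are assumptions, not the actual implementation):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll task states until every task reaches a terminal state.

    get_task_state(task_id) -> str is a hypothetical stand-in for the
    real result lookup; "SUCCESS" and "FAILURE" are treated as terminal.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"INFO  | Task {task_id} is in state {state}")
        # Keep polling only the tasks that are not yet terminal.
        pending = {t for t, s in states.items() if s not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

This matches the log's behavior of re-reporting every still-running task each round rather than only state changes.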
2026-01-09 00:52:17 | INFO  | Task ea5c8411-7d01-4d44-8c33-c5d5970f142f is in state SUCCESS 2026-01-09 00:52:17.354916 | orchestrator | 2026-01-09 00:52:17.354991 | orchestrator | 2026-01-09 00:52:17.355005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 00:52:17.355073 | orchestrator | 2026-01-09 00:52:17.355347 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:52:17.355367 | orchestrator | Friday 09 January 2026 00:49:43 +0000 (0:00:00.175) 0:00:00.175 ******** 2026-01-09 00:52:17.355377 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:52:17.355388 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:52:17.355397 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:52:17.355406 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:52:17.355432 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.355448 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.355469 | orchestrator | 2026-01-09 00:52:17.355486 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:52:17.355501 | orchestrator | Friday 09 January 2026 00:49:44 +0000 (0:00:00.769) 0:00:00.944 ******** 2026-01-09 00:52:17.355584 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-09 00:52:17.355605 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-09 00:52:17.355622 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-09 00:52:17.355633 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-09 00:52:17.355643 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-09 00:52:17.355653 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-09 00:52:17.355663 | orchestrator | 2026-01-09 00:52:17.355674 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-09 
00:52:17.355685 | orchestrator | 2026-01-09 00:52:17.355695 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-09 00:52:17.355706 | orchestrator | Friday 09 January 2026 00:49:45 +0000 (0:00:00.839) 0:00:01.784 ******** 2026-01-09 00:52:17.355717 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:52:17.355730 | orchestrator | 2026-01-09 00:52:17.355740 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-09 00:52:17.355750 | orchestrator | Friday 09 January 2026 00:49:47 +0000 (0:00:01.694) 0:00:03.478 ******** 2026-01-09 00:52:17.355764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355839 | orchestrator | 2026-01-09 00:52:17.355869 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-09 00:52:17.355879 | orchestrator | Friday 09 January 2026 00:49:48 +0000 (0:00:01.714) 0:00:05.193 ******** 2026-01-09 00:52:17.355895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355949 | orchestrator | 2026-01-09 00:52:17.355958 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-09 00:52:17.355967 | orchestrator | Friday 09 January 2026 00:49:50 +0000 (0:00:01.731) 0:00:06.924 ******** 2026-01-09 00:52:17.355976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.355985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356047 | orchestrator | 2026-01-09 00:52:17.356056 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-09 00:52:17.356064 | orchestrator | Friday 09 January 2026 00:49:52 +0000 (0:00:01.791) 0:00:08.715 ******** 
2026-01-09 00:52:17.356073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356162 | orchestrator | 2026-01-09 00:52:17.356176 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-09 00:52:17.356186 | orchestrator | Friday 09 January 2026 00:49:54 +0000 (0:00:01.895) 0:00:10.611 ******** 2026-01-09 00:52:17.356199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356217 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.356253 | orchestrator | 2026-01-09 00:52:17.356262 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-09 
00:52:17.356271 | orchestrator | Friday 09 January 2026 00:49:55 +0000 (0:00:01.570) 0:00:12.181 ******** 2026-01-09 00:52:17.356280 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:52:17.356295 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:52:17.356304 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:52:17.356312 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:52:17.356321 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:52:17.356330 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:52:17.356338 | orchestrator | 2026-01-09 00:52:17.356347 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-09 00:52:17.356356 | orchestrator | Friday 09 January 2026 00:49:58 +0000 (0:00:02.422) 0:00:14.604 ******** 2026-01-09 00:52:17.356365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-09 00:52:17.356374 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-09 00:52:17.356383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-09 00:52:17.356391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-09 00:52:17.356399 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-09 00:52:17.356408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-09 00:52:17.356417 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 00:52:17.356425 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 00:52:17.356439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 
00:52:17.356448 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 00:52:17.356457 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 00:52:17.356466 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-09 00:52:17.356478 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356490 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356499 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356516 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356525 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-09 00:52:17.356534 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-09 00:52:17.356543 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-09 00:52:17.356552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-09 00:52:17.356561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 
'value': '60000'}) 2026-01-09 00:52:17.356569 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-09 00:52:17.356578 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-09 00:52:17.356587 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356609 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356618 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356627 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356635 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356644 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-09 00:52:17.356655 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-09 00:52:17.356669 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-09 00:52:17.356683 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-09 00:52:17.356697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-09 00:52:17.356710 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-09 00:52:17.356724 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-09 00:52:17.356738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 
2026-01-09 00:52:17.356747 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-09 00:52:17.356756 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-09 00:52:17.356765 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-09 00:52:17.356773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-09 00:52:17.356782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-09 00:52:17.356792 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-09 00:52:17.356801 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-09 00:52:17.356815 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-09 00:52:17.356824 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-09 00:52:17.356833 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-09 00:52:17.356847 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-09 00:52:17.356856 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-09 00:52:17.356865 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-09 00:52:17.356874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-09 00:52:17.356882 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-09 00:52:17.356891 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-09 00:52:17.356900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-09 00:52:17.356915 | orchestrator |
2026-01-09 00:52:17.356924 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.356933 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:20.955) 0:00:35.559 ********
2026-01-09 00:52:17.356942 | orchestrator |
2026-01-09 00:52:17.356950 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.356959 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.069) 0:00:35.629 ********
2026-01-09 00:52:17.356968 | orchestrator |
2026-01-09 00:52:17.356976 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.356985 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.063) 0:00:35.692 ********
2026-01-09 00:52:17.356994 | orchestrator |
2026-01-09 00:52:17.357002 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.357011 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.071) 0:00:35.764 ********
2026-01-09 00:52:17.357020 | orchestrator |
2026-01-09 00:52:17.357028 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.357037 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.067) 0:00:35.831 ********
2026-01-09 00:52:17.357045 | orchestrator |
2026-01-09 00:52:17.357054 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-09 00:52:17.357063 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.070) 0:00:35.901 ********
2026-01-09 00:52:17.357072 | orchestrator |
2026-01-09 00:52:17.357081 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-09 00:52:17.357089 | orchestrator | Friday 09 January 2026 00:50:19 +0000 (0:00:00.075) 0:00:35.976 ********
2026-01-09 00:52:17.357126 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:52:17.357141 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357155 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357171 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:52:17.357186 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357200 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:52:17.357211 | orchestrator |
2026-01-09 00:52:17.357221 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-09 00:52:17.357230 | orchestrator | Friday 09 January 2026 00:50:22 +0000 (0:00:02.459) 0:00:38.436 ********
2026-01-09 00:52:17.357239 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.357248 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:52:17.357257 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:52:17.357266 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:52:17.357275 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:52:17.357283 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:52:17.357292 | orchestrator |
2026-01-09 00:52:17.357301 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-09 00:52:17.357310 | orchestrator |
2026-01-09 00:52:17.357319 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-09 00:52:17.357328 | orchestrator | Friday 09 January 2026 00:50:50 +0000 (0:00:28.324) 0:01:06.761 ********
2026-01-09 00:52:17.357337 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:52:17.357346 | orchestrator |
2026-01-09 00:52:17.357354 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-09 00:52:17.357363 | orchestrator | Friday 09 January 2026 00:50:52 +0000 (0:00:01.687) 0:01:08.448 ********
2026-01-09 00:52:17.357372 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:52:17.357381 | orchestrator |
2026-01-09 00:52:17.357390 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-09 00:52:17.357398 | orchestrator | Friday 09 January 2026 00:50:53 +0000 (0:00:00.887) 0:01:09.336 ********
2026-01-09 00:52:17.357414 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357423 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357432 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357440 | orchestrator |
2026-01-09 00:52:17.357449 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-09 00:52:17.357458 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:01.070) 0:01:10.406 ********
2026-01-09 00:52:17.357467 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357476 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357485 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357499 | orchestrator |
2026-01-09 00:52:17.357508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-09 00:52:17.357517 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:00.421) 0:01:10.828 ********
2026-01-09 00:52:17.357526 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357535 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357544 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357552 | orchestrator |
2026-01-09 00:52:17.357561 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-09 00:52:17.357576 | orchestrator | Friday 09 January 2026 00:50:55 +0000 (0:00:00.516) 0:01:11.346 ********
2026-01-09 00:52:17.357585 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357595 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357603 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357612 | orchestrator |
2026-01-09 00:52:17.357621 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-09 00:52:17.357629 | orchestrator | Friday 09 January 2026 00:50:55 +0000 (0:00:00.570) 0:01:11.917 ********
2026-01-09 00:52:17.357638 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.357647 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.357656 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.357665 | orchestrator |
2026-01-09 00:52:17.357673 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-09 00:52:17.357682 | orchestrator | Friday 09 January 2026 00:50:56 +0000 (0:00:01.122) 0:01:13.039 ********
2026-01-09 00:52:17.357691 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.357700 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.357709 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.357717 | orchestrator |
2026-01-09 00:52:17.357727 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-09 00:52:17.357737 | orchestrator | Friday 09 January 2026 00:50:57 +0000 (0:00:00.531) 0:01:13.571 ********
2026-01-09 00:52:17.357749 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.357760 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.357770 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.357781 | orchestrator |
2026-01-09 00:52:17.357792 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-09 00:52:17.357803 | orchestrator | Friday 09 January 2026 00:50:57 +0000 (0:00:00.330) 0:01:13.901 ********
2026-01-09 00:52:17.357814 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.357825 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.357836 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.357846 | orchestrator |
2026-01-09 00:52:17.357857 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-09 00:52:17.357868 | orchestrator | Friday 09 January 2026 00:50:57 +0000 (0:00:00.367) 0:01:14.269 ********
2026-01-09 00:52:17.357879 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.357889 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.357900 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.357911 | orchestrator |
2026-01-09 00:52:17.357922 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-09 00:52:17.357933 | orchestrator | Friday 09 January 2026 00:50:58 +0000 (0:00:00.497) 0:01:14.766 ********
2026-01-09 00:52:17.357943 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.357954 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.357972 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.357983 | orchestrator |
2026-01-09 00:52:17.357994 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-09 00:52:17.358005 | orchestrator | Friday 09 January 2026 00:50:58 +0000 (0:00:00.291) 0:01:15.058 ********
2026-01-09 00:52:17.358084 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358181 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358196 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358207 | orchestrator |
2026-01-09 00:52:17.358219 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-09 00:52:17.358230 | orchestrator | Friday 09 January 2026 00:50:59 +0000 (0:00:00.377) 0:01:15.435 ********
2026-01-09 00:52:17.358241 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358252 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358262 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358273 | orchestrator |
2026-01-09 00:52:17.358284 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-09 00:52:17.358295 | orchestrator | Friday 09 January 2026 00:50:59 +0000 (0:00:00.411) 0:01:15.849 ********
2026-01-09 00:52:17.358306 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358316 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358327 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358338 | orchestrator |
2026-01-09 00:52:17.358348 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-09 00:52:17.358359 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.562) 0:01:16.412 ********
2026-01-09 00:52:17.358370 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358381 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358391 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358402 | orchestrator |
2026-01-09 00:52:17.358413 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-09 00:52:17.358424 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.401) 0:01:16.813 ********
2026-01-09 00:52:17.358434 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358445 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358456 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358467 | orchestrator |
2026-01-09 00:52:17.358477 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-09 00:52:17.358488 | orchestrator | Friday 09 January 2026 00:51:00 +0000 (0:00:00.293) 0:01:17.106 ********
2026-01-09 00:52:17.358499 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358510 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358521 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358532 | orchestrator |
2026-01-09 00:52:17.358543 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-09 00:52:17.358554 | orchestrator | Friday 09 January 2026 00:51:01 +0000 (0:00:00.265) 0:01:17.371 ********
2026-01-09 00:52:17.358565 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358575 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358594 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358606 | orchestrator |
2026-01-09 00:52:17.358617 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-09 00:52:17.358628 | orchestrator | Friday 09 January 2026 00:51:01 +0000 (0:00:00.247) 0:01:17.619 ********
2026-01-09 00:52:17.358639 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:52:17.358650 | orchestrator |
2026-01-09 00:52:17.358661 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-01-09 00:52:17.358679 | orchestrator | Friday 09 January 2026 00:51:02 +0000 (0:00:00.682) 0:01:18.301 ********
2026-01-09 00:52:17.358690 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.358701 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.358713 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.358724 | orchestrator |
2026-01-09 00:52:17.358743 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-01-09 00:52:17.358754 | orchestrator | Friday 09 January 2026 00:51:02 +0000 (0:00:00.393) 0:01:18.695 ********
2026-01-09 00:52:17.358765 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.358776 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.358787 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.358798 | orchestrator |
2026-01-09 00:52:17.358809 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-01-09 00:52:17.358820 | orchestrator | Friday 09 January 2026 00:51:02 +0000 (0:00:00.445) 0:01:19.140 ********
2026-01-09 00:52:17.358831 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358842 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358853 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358864 | orchestrator |
2026-01-09 00:52:17.358874 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-01-09 00:52:17.358886 | orchestrator | Friday 09 January 2026 00:51:03 +0000 (0:00:00.520) 0:01:19.661 ********
2026-01-09 00:52:17.358897 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358907 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358918 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.358929 | orchestrator |
2026-01-09 00:52:17.358940 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-01-09 00:52:17.358951 | orchestrator | Friday 09 January 2026 00:51:03 +0000 (0:00:00.265) 0:01:19.926 ********
2026-01-09 00:52:17.358962 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.358973 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.358984 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.359073 | orchestrator |
2026-01-09 00:52:17.359117 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-01-09 00:52:17.359137 | orchestrator | Friday 09 January 2026 00:51:04 +0000 (0:00:00.353) 0:01:20.280 ********
2026-01-09 00:52:17.359154 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.359173 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.359190 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.359209 | orchestrator |
2026-01-09 00:52:17.359226 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-01-09 00:52:17.359244 | orchestrator | Friday 09 January 2026 00:51:04 +0000 (0:00:00.337) 0:01:20.617 ********
2026-01-09 00:52:17.359264 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.359283 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.359301 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.359319 | orchestrator |
2026-01-09 00:52:17.359338 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-01-09 00:52:17.359358 | orchestrator | Friday 09 January 2026 00:51:04 +0000 (0:00:00.589) 0:01:21.206 ********
2026-01-09 00:52:17.359377 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.359396 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.359414 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.359434 | orchestrator |
2026-01-09 00:52:17.359453 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-09 00:52:17.359471 | orchestrator | Friday 09 January 2026 00:51:05 +0000 (0:00:00.354) 0:01:21.561 ********
2026-01-09 00:52:17.359493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359675 | orchestrator |
2026-01-09 00:52:17.359686 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-09 00:52:17.359698 | orchestrator | Friday 09 January 2026 00:51:06 +0000 (0:00:01.570) 0:01:23.131 ********
2026-01-09 00:52:17.359709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359856 | orchestrator |
2026-01-09 00:52:17.359874 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-09 00:52:17.359894 | orchestrator | Friday 09 January 2026 00:51:11 +0000 (0:00:04.289) 0:01:27.421 ********
2026-01-09 00:52:17.359914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.359992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.360012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.360030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.360042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.360054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 00:52:17.360065 | orchestrator |
2026-01-09 00:52:17.360076 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-09 00:52:17.360088 | orchestrator | Friday 09 January 2026 00:51:13 +0000 (0:00:02.561) 0:01:29.983 ********
2026-01-09 00:52:17.360131 | orchestrator |
2026-01-09 00:52:17.360143 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-09 00:52:17.360154 | orchestrator | Friday 09 January 2026 00:51:13 +0000 (0:00:00.068) 0:01:30.051 ********
2026-01-09 00:52:17.360165 | orchestrator |
2026-01-09 00:52:17.360176 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-09 00:52:17.360187 | orchestrator | Friday 09 January 2026 00:51:13 +0000 (0:00:00.064) 0:01:30.116 ********
2026-01-09 00:52:17.360198 | orchestrator |
2026-01-09 00:52:17.360209 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-09 00:52:17.360220 | orchestrator | Friday 09 January 2026 00:51:13 +0000 (0:00:00.068) 0:01:30.184 ********
2026-01-09 00:52:17.360232 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.360243 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:52:17.360270 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:52:17.360289 | orchestrator |
2026-01-09 00:52:17.360308 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-09 00:52:17.360327 | orchestrator | Friday 09 January 2026 00:51:21 +0000 (0:00:07.559) 0:01:37.744 ********
2026-01-09 00:52:17.360346 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.360365 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:52:17.360384 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:52:17.360424 | orchestrator |
2026-01-09 00:52:17.360437 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-09 00:52:17.360449 | orchestrator | Friday 09 January 2026 00:51:25 +0000 (0:00:03.772) 0:01:41.517 ********
2026-01-09 00:52:17.360460 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.360471 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:52:17.360483 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:52:17.360494 | orchestrator |
2026-01-09 00:52:17.360505 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-09 00:52:17.360516 | orchestrator | Friday 09 January 2026 00:51:33 +0000 (0:00:08.317) 0:01:49.834 ********
2026-01-09 00:52:17.360544 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:52:17.360562 | orchestrator |
2026-01-09 00:52:17.360581 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-09 00:52:17.360603 | orchestrator | Friday 09 January 2026 00:51:33 +0000 (0:00:00.302) 0:01:50.136 ********
2026-01-09 00:52:17.360623 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.360642 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.360657 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.360682 | orchestrator |
2026-01-09 00:52:17.360693 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-09 00:52:17.360705 | orchestrator | Friday 09 January 2026 00:51:34 +0000 (0:00:00.904) 0:01:51.041 ********
2026-01-09 00:52:17.360716 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.360727 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.360738 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.360749 | orchestrator |
2026-01-09 00:52:17.360760 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-09 00:52:17.360783 | orchestrator | Friday 09 January 2026 00:51:35 +0000 (0:00:00.649) 0:01:51.690 ********
2026-01-09 00:52:17.360794 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.360805 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.360816 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.360827 | orchestrator |
2026-01-09 00:52:17.360839 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-09 00:52:17.360850 | orchestrator | Friday 09 January 2026 00:51:36 +0000 (0:00:00.784) 0:01:52.474 ********
2026-01-09 00:52:17.360861 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:52:17.360872 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:52:17.360883 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:52:17.360894 | orchestrator |
2026-01-09 00:52:17.360906 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-09 00:52:17.360917 | orchestrator | Friday 09 January 2026 00:51:37 +0000 (0:00:00.840) 0:01:53.315 ********
2026-01-09 00:52:17.360931 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.360949 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.360977 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.360997 | orchestrator |
2026-01-09 00:52:17.361015 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-09 00:52:17.361032 | orchestrator | Friday 09 January 2026 00:51:37 +0000 (0:00:00.883) 0:01:54.199 ********
2026-01-09 00:52:17.361049 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:52:17.361069 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:52:17.361087 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:52:17.361135 | orchestrator |
2026-01-09 00:52:17.361155 | orchestrator | TASK [ovn-db : Unset
bootstrap args fact] ************************************** 2026-01-09 00:52:17.361189 | orchestrator | Friday 09 January 2026 00:51:38 +0000 (0:00:00.946) 0:01:55.145 ******** 2026-01-09 00:52:17.361219 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:52:17.361239 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.361257 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.361275 | orchestrator | 2026-01-09 00:52:17.361294 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-09 00:52:17.361314 | orchestrator | Friday 09 January 2026 00:51:39 +0000 (0:00:00.320) 0:01:55.466 ******** 2026-01-09 00:52:17.361335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361398 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361410 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361434 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361445 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-09 00:52:17.361474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361496 | orchestrator | 2026-01-09 00:52:17.361508 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-09 00:52:17.361519 | orchestrator | Friday 09 January 2026 00:51:41 +0000 (0:00:02.108) 0:01:57.575 ******** 2026-01-09 00:52:17.361536 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361570 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361645 | orchestrator | 2026-01-09 00:52:17.361657 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-09 00:52:17.361668 | orchestrator | Friday 09 January 2026 00:51:46 +0000 (0:00:05.040) 0:02:02.615 ******** 2026-01-09 00:52:17.361687 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361704 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361750 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 00:52:17.361803 | orchestrator | 2026-01-09 00:52:17.361830 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-09 00:52:17.361842 | orchestrator | Friday 09 January 2026 00:51:49 +0000 (0:00:03.176) 0:02:05.791 ******** 2026-01-09 00:52:17.361853 | orchestrator | 2026-01-09 00:52:17.361864 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-09 00:52:17.361876 | orchestrator | Friday 09 January 2026 00:51:49 +0000 (0:00:00.064) 0:02:05.856 ******** 2026-01-09 00:52:17.361886 | orchestrator | 2026-01-09 00:52:17.361897 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-09 00:52:17.361908 | orchestrator | Friday 09 January 2026 00:51:49 +0000 (0:00:00.066) 0:02:05.923 ******** 2026-01-09 00:52:17.361919 | orchestrator | 2026-01-09 00:52:17.361942 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-09 00:52:17.361954 | orchestrator | Friday 09 January 2026 00:51:49 +0000 (0:00:00.065) 0:02:05.989 ******** 2026-01-09 00:52:17.361965 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:52:17.361976 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:52:17.361987 | 
orchestrator | 2026-01-09 00:52:17.362005 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-09 00:52:17.362062 | orchestrator | Friday 09 January 2026 00:51:55 +0000 (0:00:06.244) 0:02:12.233 ******** 2026-01-09 00:52:17.362078 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:52:17.362089 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:52:17.362310 | orchestrator | 2026-01-09 00:52:17.362339 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-09 00:52:17.362351 | orchestrator | Friday 09 January 2026 00:52:02 +0000 (0:00:06.325) 0:02:18.559 ******** 2026-01-09 00:52:17.362362 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:52:17.362374 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:52:17.362385 | orchestrator | 2026-01-09 00:52:17.362396 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-09 00:52:17.362407 | orchestrator | Friday 09 January 2026 00:52:09 +0000 (0:00:06.718) 0:02:25.278 ******** 2026-01-09 00:52:17.362419 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:52:17.362430 | orchestrator | 2026-01-09 00:52:17.362441 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-09 00:52:17.362452 | orchestrator | Friday 09 January 2026 00:52:09 +0000 (0:00:00.162) 0:02:25.441 ******** 2026-01-09 00:52:17.362503 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:52:17.362516 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.362528 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.362539 | orchestrator | 2026-01-09 00:52:17.362550 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-09 00:52:17.362562 | orchestrator | Friday 09 January 2026 00:52:10 +0000 (0:00:00.978) 0:02:26.419 ******** 2026-01-09 00:52:17.362572 | orchestrator | 
skipping: [testbed-node-1] 2026-01-09 00:52:17.362584 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:52:17.362595 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:52:17.362606 | orchestrator | 2026-01-09 00:52:17.362618 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-09 00:52:17.362629 | orchestrator | Friday 09 January 2026 00:52:10 +0000 (0:00:00.734) 0:02:27.153 ******** 2026-01-09 00:52:17.362639 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:52:17.362647 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.362656 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.362663 | orchestrator | 2026-01-09 00:52:17.362671 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-09 00:52:17.362679 | orchestrator | Friday 09 January 2026 00:52:11 +0000 (0:00:00.900) 0:02:28.054 ******** 2026-01-09 00:52:17.362688 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:52:17.362695 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:52:17.362704 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:52:17.362712 | orchestrator | 2026-01-09 00:52:17.362720 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-09 00:52:17.362738 | orchestrator | Friday 09 January 2026 00:52:12 +0000 (0:00:00.846) 0:02:28.900 ******** 2026-01-09 00:52:17.362747 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:52:17.362755 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.362763 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.362771 | orchestrator | 2026-01-09 00:52:17.362779 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-09 00:52:17.362787 | orchestrator | Friday 09 January 2026 00:52:13 +0000 (0:00:00.889) 0:02:29.790 ******** 2026-01-09 00:52:17.362795 | orchestrator | ok: [testbed-node-0] 2026-01-09 
00:52:17.362803 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:52:17.362811 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:52:17.362820 | orchestrator | 2026-01-09 00:52:17.362828 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:52:17.362836 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-09 00:52:17.362845 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-09 00:52:17.362853 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-09 00:52:17.362861 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:52:17.362870 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:52:17.362878 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 00:52:17.362886 | orchestrator | 2026-01-09 00:52:17.362894 | orchestrator | 2026-01-09 00:52:17.362903 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:52:17.362911 | orchestrator | Friday 09 January 2026 00:52:14 +0000 (0:00:00.993) 0:02:30.784 ******** 2026-01-09 00:52:17.362919 | orchestrator | =============================================================================== 2026-01-09 00:52:17.362927 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.32s 2026-01-09 00:52:17.362935 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.96s 2026-01-09 00:52:17.362943 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.04s 2026-01-09 00:52:17.362951 | orchestrator | ovn-db : Restart ovn-nb-db container 
----------------------------------- 13.80s 2026-01-09 00:52:17.362960 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 10.10s 2026-01-09 00:52:17.362967 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.04s 2026-01-09 00:52:17.362976 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.29s 2026-01-09 00:52:17.362995 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.18s 2026-01-09 00:52:17.363003 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.56s 2026-01-09 00:52:17.363012 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.46s 2026-01-09 00:52:17.363020 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.42s 2026-01-09 00:52:17.363028 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.11s 2026-01-09 00:52:17.363041 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.90s 2026-01-09 00:52:17.363049 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.79s 2026-01-09 00:52:17.363057 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.73s 2026-01-09 00:52:17.363065 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.71s 2026-01-09 00:52:17.363078 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.69s 2026-01-09 00:52:17.363086 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.69s 2026-01-09 00:52:17.363116 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s 2026-01-09 00:52:17.363125 | orchestrator | ovn-controller : Check ovn-controller containers 
------------------------ 1.57s 2026-01-09 00:52:17.363134 | orchestrator | 2026-01-09 00:52:17 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:52:17.363143 | orchestrator | 2026-01-09 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:52:20.396134 | orchestrator | 2026-01-09 00:52:20 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:52:20.398694 | orchestrator | 2026-01-09 00:52:20 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:52:20.398786 | orchestrator | 2026-01-09 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:36.702548 | orchestrator | 2026-01-09 00:53:36 | INFO  | Task 
fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:36.705054 | orchestrator | 2026-01-09 00:53:36 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:36.705127 | orchestrator | 2026-01-09 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:39.756339 | orchestrator | 2026-01-09 00:53:39 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:39.757807 | orchestrator | 2026-01-09 00:53:39 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:39.758104 | orchestrator | 2026-01-09 00:53:39 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:42.795333 | orchestrator | 2026-01-09 00:53:42 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:42.796298 | orchestrator | 2026-01-09 00:53:42 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:42.796344 | orchestrator | 2026-01-09 00:53:42 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:45.844360 | orchestrator | 2026-01-09 00:53:45 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:45.846779 | orchestrator | 2026-01-09 00:53:45 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:45.846823 | orchestrator | 2026-01-09 00:53:45 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:48.892923 | orchestrator | 2026-01-09 00:53:48 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:48.894613 | orchestrator | 2026-01-09 00:53:48 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:48.894871 | orchestrator | 2026-01-09 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:51.943399 | orchestrator | 2026-01-09 00:53:51 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 
00:53:51.945737 | orchestrator | 2026-01-09 00:53:51 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:51.945817 | orchestrator | 2026-01-09 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:54.989769 | orchestrator | 2026-01-09 00:53:54 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:54.991369 | orchestrator | 2026-01-09 00:53:54 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:54.991401 | orchestrator | 2026-01-09 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:53:58.041178 | orchestrator | 2026-01-09 00:53:58 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:53:58.045421 | orchestrator | 2026-01-09 00:53:58 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:53:58.045508 | orchestrator | 2026-01-09 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:01.091061 | orchestrator | 2026-01-09 00:54:01 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:01.096722 | orchestrator | 2026-01-09 00:54:01 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:01.096780 | orchestrator | 2026-01-09 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:04.137699 | orchestrator | 2026-01-09 00:54:04 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:04.138393 | orchestrator | 2026-01-09 00:54:04 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:04.138433 | orchestrator | 2026-01-09 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:07.189161 | orchestrator | 2026-01-09 00:54:07 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:07.191475 | orchestrator | 2026-01-09 00:54:07 | INFO  | Task 
8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:07.191642 | orchestrator | 2026-01-09 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:10.234832 | orchestrator | 2026-01-09 00:54:10 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:10.243421 | orchestrator | 2026-01-09 00:54:10 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:10.243544 | orchestrator | 2026-01-09 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:13.301103 | orchestrator | 2026-01-09 00:54:13 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:13.302759 | orchestrator | 2026-01-09 00:54:13 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:13.303185 | orchestrator | 2026-01-09 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:16.348324 | orchestrator | 2026-01-09 00:54:16 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:16.348761 | orchestrator | 2026-01-09 00:54:16 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:16.348807 | orchestrator | 2026-01-09 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:19.403126 | orchestrator | 2026-01-09 00:54:19 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:19.404895 | orchestrator | 2026-01-09 00:54:19 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:19.404964 | orchestrator | 2026-01-09 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:22.446819 | orchestrator | 2026-01-09 00:54:22 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:22.447088 | orchestrator | 2026-01-09 00:54:22 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 
00:54:22.447114 | orchestrator | 2026-01-09 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:25.487357 | orchestrator | 2026-01-09 00:54:25 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:25.490359 | orchestrator | 2026-01-09 00:54:25 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:25.490948 | orchestrator | 2026-01-09 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:28.535496 | orchestrator | 2026-01-09 00:54:28 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:28.540100 | orchestrator | 2026-01-09 00:54:28 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:28.540190 | orchestrator | 2026-01-09 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:31.586395 | orchestrator | 2026-01-09 00:54:31 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:31.588292 | orchestrator | 2026-01-09 00:54:31 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:31.588353 | orchestrator | 2026-01-09 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:34.625829 | orchestrator | 2026-01-09 00:54:34 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:34.627869 | orchestrator | 2026-01-09 00:54:34 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:34.627910 | orchestrator | 2026-01-09 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:37.665858 | orchestrator | 2026-01-09 00:54:37 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:37.667178 | orchestrator | 2026-01-09 00:54:37 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:37.667365 | orchestrator | 2026-01-09 00:54:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-09 00:54:40.706157 | orchestrator | 2026-01-09 00:54:40 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:40.706416 | orchestrator | 2026-01-09 00:54:40 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:40.706463 | orchestrator | 2026-01-09 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:43.759619 | orchestrator | 2026-01-09 00:54:43 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:43.760261 | orchestrator | 2026-01-09 00:54:43 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:43.760299 | orchestrator | 2026-01-09 00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:46.810171 | orchestrator | 2026-01-09 00:54:46 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:46.812808 | orchestrator | 2026-01-09 00:54:46 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:46.812854 | orchestrator | 2026-01-09 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:49.858926 | orchestrator | 2026-01-09 00:54:49 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:49.860922 | orchestrator | 2026-01-09 00:54:49 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:49.860999 | orchestrator | 2026-01-09 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:52.905893 | orchestrator | 2026-01-09 00:54:52 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:52.908046 | orchestrator | 2026-01-09 00:54:52 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:52.908205 | orchestrator | 2026-01-09 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:55.952370 | orchestrator | 2026-01-09 
00:54:55 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:55.956996 | orchestrator | 2026-01-09 00:54:55 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:55.957091 | orchestrator | 2026-01-09 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:54:59.014912 | orchestrator | 2026-01-09 00:54:59 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:54:59.016710 | orchestrator | 2026-01-09 00:54:59 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:54:59.016989 | orchestrator | 2026-01-09 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:55:02.069524 | orchestrator | 2026-01-09 00:55:02 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:55:02.071708 | orchestrator | 2026-01-09 00:55:02 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:55:02.071963 | orchestrator | 2026-01-09 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:55:05.123132 | orchestrator | 2026-01-09 00:55:05 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:55:05.124033 | orchestrator | 2026-01-09 00:55:05 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:55:05.124064 | orchestrator | 2026-01-09 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:55:08.186103 | orchestrator | 2026-01-09 00:55:08 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED 2026-01-09 00:55:08.186211 | orchestrator | 2026-01-09 00:55:08 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:55:08.186225 | orchestrator | 2026-01-09 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:55:11.228509 | orchestrator | 2026-01-09 00:55:11 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state 
2026-01-09 00:55:11.230364 | orchestrator | 2026-01-09 00:55:11 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:55:11.230441 | orchestrator | 2026-01-09 00:55:11 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:55:14.267844 | orchestrator | 2026-01-09 00:55:14 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:55:14.269368 | orchestrator | 2026-01-09 00:55:14 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:55:14.269451 | orchestrator | 2026-01-09 00:55:14 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:55:17.315120 | orchestrator | 2026-01-09 00:55:17 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state STARTED
2026-01-09 00:55:17.316557 | orchestrator | 2026-01-09 00:55:17 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED
2026-01-09 00:55:17.316612 | orchestrator | 2026-01-09 00:55:17 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:55:20.355916 | orchestrator | 2026-01-09 00:55:20 | INFO  | Task fee975a5-1a56-4aa5-9f71-abb202d75750 is in state SUCCESS
2026-01-09 00:55:20.359595 | orchestrator |
2026-01-09 00:55:20.359681 | orchestrator |
2026-01-09 00:55:20.359695 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 00:55:20.359706 | orchestrator |
2026-01-09 00:55:20.359717 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 00:55:20.359728 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.470) 0:00:00.470 ********
2026-01-09 00:55:20.359739 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:55:20.359750 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:55:20.359760 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:55:20.359770 | orchestrator |
2026-01-09 00:55:20.359780 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 00:55:20.359791 | orchestrator | Friday 09 January 2026 00:48:27 +0000 (0:00:00.416) 0:00:00.886 ********
2026-01-09 00:55:20.359801 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-09 00:55:20.359812 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-09 00:55:20.359822 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-09 00:55:20.359832 | orchestrator |
2026-01-09 00:55:20.359865 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-09 00:55:20.359876 | orchestrator |
2026-01-09 00:55:20.359886 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-09 00:55:20.359896 | orchestrator | Friday 09 January 2026 00:48:28 +0000 (0:00:01.193) 0:00:02.080 ********
2026-01-09 00:55:20.359906 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:55:20.359917 | orchestrator |
2026-01-09 00:55:20.359927 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-09 00:55:20.359937 | orchestrator | Friday 09 January 2026 00:48:29 +0000 (0:00:00.786) 0:00:02.867 ********
2026-01-09 00:55:20.359946 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:55:20.359956 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:55:20.359968 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:55:20.359978 | orchestrator |
2026-01-09 00:55:20.359988 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-09 00:55:20.359998 | orchestrator | Friday 09 January 2026 00:48:30 +0000 (0:00:00.878) 0:00:03.745 ********
2026-01-09 00:55:20.360008 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:55:20.360018 | orchestrator |
2026-01-09 00:55:20.360028 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-09 00:55:20.360038 | orchestrator | Friday 09 January 2026 00:48:31 +0000 (0:00:01.268) 0:00:05.014 ********
2026-01-09 00:55:20.360048 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:55:20.360058 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:55:20.360097 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:55:20.360107 | orchestrator |
2026-01-09 00:55:20.360117 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-09 00:55:20.360127 | orchestrator | Friday 09 January 2026 00:48:32 +0000 (0:00:00.815) 0:00:05.830 ********
2026-01-09 00:55:20.360137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360148 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360194 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-09 00:55:20.360207 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-09 00:55:20.360219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-09 00:55:20.360230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-09 00:55:20.360242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-09 00:55:20.360254 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-09 00:55:20.360265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-09 00:55:20.360277 | orchestrator |
2026-01-09 00:55:20.360288 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-09 00:55:20.360300 | orchestrator | Friday 09 January 2026 00:48:36 +0000 (0:00:04.214) 0:00:10.045 ********
2026-01-09 00:55:20.360312 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-09 00:55:20.360324 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-09 00:55:20.360334 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-09 00:55:20.360343 | orchestrator |
2026-01-09 00:55:20.360353 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-09 00:55:20.360365 | orchestrator | Friday 09 January 2026 00:48:37 +0000 (0:00:00.833) 0:00:10.878 ********
2026-01-09 00:55:20.360381 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-09 00:55:20.360403 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-09 00:55:20.360443 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-09 00:55:20.360460 | orchestrator |
2026-01-09 00:55:20.360475 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-09 00:55:20.360491 | orchestrator | Friday 09 January 2026 00:48:38 +0000 (0:00:01.526) 0:00:12.405 ********
2026-01-09 00:55:20.360506 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-09 00:55:20.360523 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.360561 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-09 00:55:20.360579 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.360595 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-09 00:55:20.360611 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.360623 | orchestrator |
2026-01-09 00:55:20.360633 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-09 00:55:20.360642 | orchestrator | Friday 09 January 2026 00:48:40 +0000 (0:00:01.339) 0:00:13.744 ********
2026-01-09 00:55:20.360656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.360684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.360696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.360706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.360717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.360744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.360763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.360787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.360801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.360815 | orchestrator |
2026-01-09 00:55:20.360829 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-09 00:55:20.360898 | orchestrator | Friday 09 January 2026 00:48:42 +0000 (0:00:01.973) 0:00:15.718 ********
2026-01-09 00:55:20.360914 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:55:20.360928 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:55:20.360942 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:55:20.360955 | orchestrator |
2026-01-09 00:55:20.360970 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-09 00:55:20.360983 | orchestrator | Friday 09 January 2026 00:48:43 +0000 (0:00:01.561) 0:00:17.279 ********
2026-01-09 00:55:20.360997 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-09 00:55:20.361012 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-09 00:55:20.361027 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-09 00:55:20.361043 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-09 00:55:20.361059 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-09 00:55:20.361075 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-09 00:55:20.361092 | orchestrator |
2026-01-09 00:55:20.361106 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-09 00:55:20.361121 | orchestrator | Friday 09 January 2026 00:48:46 +0000 (0:00:02.605) 0:00:19.884 ********
2026-01-09 00:55:20.361138 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:55:20.361155 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:55:20.361171 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:55:20.361187 | orchestrator |
2026-01-09 00:55:20.361204 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-09 00:55:20.361219 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:01.464) 0:00:21.349 ********
2026-01-09 00:55:20.361229 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:55:20.361240 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:55:20.361257 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:55:20.361273 | orchestrator |
2026-01-09 00:55:20.361288 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-09 00:55:20.361304 | orchestrator | Friday 09 January 2026 00:48:52 +0000 (0:00:05.094) 0:00:26.443 ********
2026-01-09 00:55:20.361322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.361398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.361421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.361439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-09 00:55:20.361457 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.361472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.361490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.361509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.361535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-09
00:55:20.361565 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.361616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.361641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.361661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.361672 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-09 00:55:20.361682 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.361692 | orchestrator | 2026-01-09 00:55:20.361701 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-09 00:55:20.361712 | orchestrator | Friday 09 January 2026 00:48:54 +0000 (0:00:01.640) 0:00:28.083 ******** 2026-01-09 00:55:20.361722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.361795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-09 00:55:20.361805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.361860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-09 00:55:20.361881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.361902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a', '__omit_place_holder__31b9c649c3228627977df054d7ca9eeaaf47e53a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-09 00:55:20.361912 | orchestrator | 2026-01-09 00:55:20.361922 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-09 00:55:20.361931 | orchestrator | Friday 09 January 2026 00:48:58 +0000 (0:00:03.385) 0:00:31.469 ******** 2026-01-09 00:55:20.361941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.361998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.362009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.362077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.362091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.362102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.362118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.362128 | orchestrator | 2026-01-09 00:55:20.362138 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-09 00:55:20.362148 | orchestrator | Friday 09 January 2026 00:49:02 +0000 (0:00:04.719) 0:00:36.189 ******** 2026-01-09 00:55:20.362159 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-09 00:55:20.362169 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-09 00:55:20.362179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-09 00:55:20.362189 | orchestrator | 2026-01-09 00:55:20.362199 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-09 00:55:20.362208 | orchestrator | Friday 09 January 2026 00:49:05 +0000 (0:00:03.073) 0:00:39.263 ******** 2026-01-09 00:55:20.362223 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-09 00:55:20.362234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-09 00:55:20.362244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-09 00:55:20.362253 | orchestrator | 2026-01-09 00:55:20.362944 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-09 00:55:20.363055 | orchestrator | Friday 09 January 2026 00:49:08 +0000 (0:00:02.951) 0:00:42.214 ******** 2026-01-09 00:55:20.363067 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363076 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.363082 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.363089 | orchestrator | 2026-01-09 00:55:20.363097 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-09 00:55:20.363104 | orchestrator | Friday 09 January 2026 00:49:09 +0000 (0:00:00.523) 0:00:42.738 ******** 2026-01-09 00:55:20.363112 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-09 00:55:20.363122 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-09 00:55:20.363129 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-09 00:55:20.363136 | orchestrator | 2026-01-09 00:55:20.363142 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-09 00:55:20.363148 | orchestrator | Friday 09 January 2026 00:49:12 +0000 (0:00:02.797) 0:00:45.538 ******** 2026-01-09 00:55:20.363156 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-09 00:55:20.363163 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-09 00:55:20.363169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-09 00:55:20.363176 | orchestrator | 2026-01-09 00:55:20.363182 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-09 00:55:20.363189 | orchestrator | Friday 09 January 2026 00:49:16 +0000 (0:00:04.215) 0:00:49.753 ******** 2026-01-09 00:55:20.363225 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-09 00:55:20.363233 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-09 00:55:20.363239 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-09 00:55:20.363245 | orchestrator | 2026-01-09 00:55:20.363252 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-09 00:55:20.363258 | orchestrator | Friday 09 January 2026 00:49:18 +0000 (0:00:02.050) 0:00:51.803 ******** 2026-01-09 00:55:20.363264 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-09 00:55:20.363270 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-09 00:55:20.363276 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-09 00:55:20.363282 | orchestrator | 2026-01-09 00:55:20.363289 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-09 00:55:20.363295 | orchestrator | Friday 09 January 2026 00:49:20 +0000 (0:00:01.846) 0:00:53.649 ******** 2026-01-09 00:55:20.363301 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.363307 | 
orchestrator | 2026-01-09 00:55:20.363313 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-09 00:55:20.363319 | orchestrator | Friday 09 January 2026 00:49:21 +0000 (0:00:00.835) 0:00:54.485 ******** 2026-01-09 00:55:20.363329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-09 00:55:20.363403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.363411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.363417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-09 00:55:20.363423 | orchestrator | 2026-01-09 00:55:20.363430 
| orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-09 00:55:20.363437 | orchestrator | Friday 09 January 2026 00:49:25 +0000 (0:00:04.076) 0:00:58.561 ******** 2026-01-09 00:55:20.363449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363473 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363498 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.363553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-01-09 00:55:20.363584 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.363590 | orchestrator | 2026-01-09 00:55:20.363596 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-09 00:55:20.363602 | orchestrator | Friday 09 January 2026 00:49:26 +0000 (0:00:01.019) 0:00:59.581 ******** 2026-01-09 00:55:20.363609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363658 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.363663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363668 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363686 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.363691 | orchestrator | 2026-01-09 00:55:20.363698 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-09 00:55:20.363705 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:01.750) 0:01:01.331 ******** 2026-01-09 00:55:20.363715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363747 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363753 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363771 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363793 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.363808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363815 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.363821 | orchestrator | 2026-01-09 00:55:20.363827 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-09 00:55:20.363833 | orchestrator | Friday 09 January 2026 00:49:31 +0000 (0:00:03.443) 0:01:04.775 ******** 2026-01-09 00:55:20.363867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363885 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363923 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.363935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363947 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.363951 | orchestrator | 2026-01-09 00:55:20.363955 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-09 00:55:20.363959 | orchestrator | Friday 09 January 2026 00:49:32 +0000 (0:00:01.395) 0:01:06.171 ******** 2026-01-09 00:55:20.363963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.363976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.363980 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.363992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.363996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.364000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.364004 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.364008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.364012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.364016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.364024 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.364028 | orchestrator | 2026-01-09 00:55:20.364032 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-09 00:55:20.364036 | orchestrator | Friday 09 January 2026 00:49:33 +0000 (0:00:01.077) 0:01:07.248 ******** 2026-01-09 00:55:20.364042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.364051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-09 00:55:20.364055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-09 00:55:20.364059 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.364063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-09 00:55:20.364067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364079 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.364082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364100 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.364104 | orchestrator |
2026-01-09 00:55:20.364108 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-01-09 00:55:20.364112 | orchestrator | Friday 09 January 2026 00:49:35 +0000 (0:00:01.338) 0:01:08.587 ********
2026-01-09 00:55:20.364115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364131 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.364135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364157 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.364161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364176 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.364180 | orchestrator |
2026-01-09 00:55:20.364184 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-01-09 00:55:20.364188 | orchestrator | Friday 09 January 2026 00:49:35 +0000 (0:00:00.714) 0:01:09.302 ********
2026-01-09 00:55:20.364192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364207 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.364215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364238 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.364242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364246 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.364249 | orchestrator |
2026-01-09 00:55:20.364253 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-01-09 00:55:20.364257 | orchestrator | Friday 09 January 2026 00:49:36 +0000 (0:00:00.810) 0:01:10.113 ********
2026-01-09 00:55:20.364264 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-09 00:55:20.364270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-09 00:55:20.364276 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-09 00:55:20.364280 | orchestrator |
2026-01-09 00:55:20.364284 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-01-09 00:55:20.364288 | orchestrator | Friday 09 January 2026 00:49:38 +0000 (0:00:01.929) 0:01:12.042 ********
2026-01-09 00:55:20.364292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-09 00:55:20.364296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-09 00:55:20.364299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-09 00:55:20.364303 | orchestrator |
2026-01-09 00:55:20.364307 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-01-09 00:55:20.364311 | orchestrator | Friday 09 January 2026 00:49:40 +0000 (0:00:01.921) 0:01:13.963 ********
2026-01-09 00:55:20.364314 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 00:55:20.364318 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 00:55:20.364327 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-09 00:55:20.364330 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.364334 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 00:55:20.364338 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-09 00:55:20.364342 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.364346 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-09 00:55:20.364349 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.364353 | orchestrator |
2026-01-09 00:55:20.364357 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-01-09 00:55:20.364361 | orchestrator | Friday 09 January 2026 00:49:41 +0000 (0:00:01.113) 0:01:15.077 ********
2026-01-09 00:55:20.364365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-09 00:55:20.364383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-09 00:55:20.364399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-09 00:55:20.364411 | orchestrator |
2026-01-09 00:55:20.364415 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-01-09 00:55:20.364419 | orchestrator | Friday 09 January 2026 00:49:44 +0000 (0:00:03.000) 0:01:18.077 ********
2026-01-09 00:55:20.364423 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:55:20.364427 | orchestrator |
2026-01-09 00:55:20.364431 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-01-09 00:55:20.364435 | orchestrator | Friday 09 January 2026 00:49:45 +0000 (0:00:00.791) 0:01:18.868 ********
2026-01-09 00:55:20.364442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364833 | orchestrator |
2026-01-09 00:55:20.364857 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-09 00:55:20.364864 | orchestrator | Friday 09 January 2026 00:49:49 +0000 (0:00:04.587) 0:01:23.455 ********
2026-01-09 00:55:20.364870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364904 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:55:20.364908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364933 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:55:20.364939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-09 00:55:20.364943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-09 00:55:20.364947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-09 00:55:20.364955 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:55:20.364959 | orchestrator |
2026-01-09 00:55:20.364962 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-09 00:55:20.364966 | orchestrator | Friday 09 January 2026 00:49:51 +0000 (0:00:01.642) 0:01:25.098 ********
2026-01-09 00:55:20.364971 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.364976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.364982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.364990 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.364994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.364998 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.365008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-09 00:55:20.365012 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365016 | orchestrator | 2026-01-09 00:55:20.365022 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-09 00:55:20.365026 | orchestrator | Friday 09 January 2026 00:49:52 +0000 (0:00:01.057) 0:01:26.156 ******** 2026-01-09 00:55:20.365029 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.365033 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.365037 | orchestrator | changed: 
[testbed-node-2] 2026-01-09 00:55:20.365041 | orchestrator | 2026-01-09 00:55:20.365044 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-09 00:55:20.365048 | orchestrator | Friday 09 January 2026 00:49:54 +0000 (0:00:01.525) 0:01:27.681 ******** 2026-01-09 00:55:20.365052 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.365056 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.365059 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.365063 | orchestrator | 2026-01-09 00:55:20.365067 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-09 00:55:20.365071 | orchestrator | Friday 09 January 2026 00:49:56 +0000 (0:00:02.152) 0:01:29.833 ******** 2026-01-09 00:55:20.365074 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.365078 | orchestrator | 2026-01-09 00:55:20.365082 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-09 00:55:20.365086 | orchestrator | Friday 09 January 2026 00:49:57 +0000 (0:00:00.837) 0:01:30.670 ******** 2026-01-09 00:55:20.365090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2026-01-09 00:55:20.365140 | orchestrator | 2026-01-09 00:55:20.365144 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-09 00:55:20.365148 | orchestrator | Friday 09 January 2026 00:50:02 +0000 (0:00:05.345) 0:01:36.016 ******** 2026-01-09 00:55:20.365157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.365163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365170 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365179 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.365199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365211 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-01-09 00:55:20.365254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365273 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365279 | orchestrator | 2026-01-09 00:55:20.365285 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-09 00:55:20.365291 | orchestrator | Friday 09 January 2026 00:50:03 +0000 (0:00:00.566) 0:01:36.583 ******** 2026-01-09 00:55:20.365298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365306 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365312 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365331 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-09 00:55:20.365348 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365354 | orchestrator | 2026-01-09 00:55:20.365360 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-09 00:55:20.365366 | orchestrator | Friday 09 January 2026 00:50:04 +0000 (0:00:00.967) 0:01:37.550 ******** 2026-01-09 00:55:20.365372 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.365378 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.365388 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.365394 | orchestrator | 2026-01-09 
00:55:20.365400 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-09 00:55:20.365406 | orchestrator | Friday 09 January 2026 00:50:05 +0000 (0:00:01.382) 0:01:38.933 ******** 2026-01-09 00:55:20.365412 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.365418 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.365424 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.365430 | orchestrator | 2026-01-09 00:55:20.365439 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-09 00:55:20.365445 | orchestrator | Friday 09 January 2026 00:50:07 +0000 (0:00:01.939) 0:01:40.872 ******** 2026-01-09 00:55:20.365451 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365457 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365463 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365469 | orchestrator | 2026-01-09 00:55:20.365475 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-09 00:55:20.365481 | orchestrator | Friday 09 January 2026 00:50:07 +0000 (0:00:00.344) 0:01:41.217 ******** 2026-01-09 00:55:20.365487 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.365493 | orchestrator | 2026-01-09 00:55:20.365499 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-09 00:55:20.365505 | orchestrator | Friday 09 January 2026 00:50:08 +0000 (0:00:00.920) 0:01:42.137 ******** 2026-01-09 00:55:20.365516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-09 00:55:20.365523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-09 00:55:20.365530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-09 00:55:20.365537 | orchestrator | 2026-01-09 00:55:20.365542 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-09 00:55:20.365548 | orchestrator | Friday 09 January 2026 00:50:11 +0000 (0:00:02.723) 0:01:44.860 ******** 2026-01-09 00:55:20.365562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-09 00:55:20.365568 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-09 00:55:20.365585 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-09 00:55:20.365597 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365603 | orchestrator | 2026-01-09 00:55:20.365609 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-09 00:55:20.365615 | orchestrator | Friday 09 January 2026 00:50:13 +0000 (0:00:01.710) 0:01:46.571 ******** 2026-01-09 00:55:20.365622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365638 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365656 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 
check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-09 00:55:20.365687 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365693 | orchestrator | 2026-01-09 00:55:20.365699 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-09 00:55:20.365705 | orchestrator | Friday 09 January 2026 00:50:15 +0000 (0:00:02.159) 0:01:48.730 ******** 2026-01-09 00:55:20.365711 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365718 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365724 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365730 | orchestrator | 2026-01-09 00:55:20.365736 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-09 00:55:20.365743 | orchestrator | Friday 09 January 2026 00:50:16 +0000 (0:00:00.891) 0:01:49.621 ******** 2026-01-09 00:55:20.365749 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.365755 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.365761 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.365766 | orchestrator | 2026-01-09 00:55:20.365772 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-09 00:55:20.365778 | orchestrator | Friday 09 January 2026 00:50:17 +0000 (0:00:01.461) 0:01:51.082 ******** 2026-01-09 00:55:20.365784 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 
00:55:20.365790 | orchestrator | 2026-01-09 00:55:20.365796 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-09 00:55:20.365802 | orchestrator | Friday 09 January 2026 00:50:18 +0000 (0:00:00.840) 0:01:51.923 ******** 2026-01-09 00:55:20.365809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.365943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365960 | orchestrator | 2026-01-09 00:55:20.365964 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-09 00:55:20.365968 | orchestrator | Friday 09 January 2026 00:50:24 +0000 (0:00:05.851) 0:01:57.774 ******** 2026-01-09 00:55:20.365972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.365980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2026-01-09 00:55:20.365995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.365998 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.366002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.366006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366628 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.366642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.366647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366662 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.366666 | orchestrator | 2026-01-09 00:55:20.366670 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-09 00:55:20.366674 | orchestrator | Friday 09 January 2026 00:50:26 +0000 (0:00:01.932) 0:01:59.707 ******** 2026-01-09 00:55:20.366679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366688 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.366692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366702 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.366706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-09 00:55:20.366717 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.366721 | orchestrator | 2026-01-09 00:55:20.366725 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-09 00:55:20.366729 | orchestrator | Friday 09 January 2026 00:50:27 +0000 (0:00:01.359) 0:02:01.067 ******** 2026-01-09 00:55:20.366732 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.366736 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.366740 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.366744 | orchestrator | 2026-01-09 00:55:20.366748 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-09 00:55:20.366751 | orchestrator | Friday 09 January 2026 00:50:29 +0000 (0:00:01.643) 0:02:02.711 ******** 2026-01-09 00:55:20.366755 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.366759 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.366763 | orchestrator | changed: 
[testbed-node-2] 2026-01-09 00:55:20.366766 | orchestrator | 2026-01-09 00:55:20.366770 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-09 00:55:20.366774 | orchestrator | Friday 09 January 2026 00:50:32 +0000 (0:00:02.880) 0:02:05.592 ******** 2026-01-09 00:55:20.366779 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.366785 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.366790 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.366796 | orchestrator | 2026-01-09 00:55:20.366803 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-09 00:55:20.366812 | orchestrator | Friday 09 January 2026 00:50:32 +0000 (0:00:00.662) 0:02:06.254 ******** 2026-01-09 00:55:20.366818 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.366825 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.366831 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.366890 | orchestrator | 2026-01-09 00:55:20.366898 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-09 00:55:20.366904 | orchestrator | Friday 09 January 2026 00:50:33 +0000 (0:00:00.367) 0:02:06.621 ******** 2026-01-09 00:55:20.366916 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.366921 | orchestrator | 2026-01-09 00:55:20.366927 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-09 00:55:20.366932 | orchestrator | Friday 09 January 2026 00:50:33 +0000 (0:00:00.840) 0:02:07.462 ******** 2026-01-09 00:55:20.366939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 00:55:20.366947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.366957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366969 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.366999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 00:55:20.367006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.367020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367042 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 00:55:20.367063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.367074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-01-09 00:55:20.367093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367109 | orchestrator | 2026-01-09 00:55:20.367116 | orchestrator | TASK 
[haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-09 00:55:20.367123 | orchestrator | Friday 09 January 2026 00:50:40 +0000 (0:00:06.151) 0:02:13.613 ******** 2026-01-09 00:55:20.367133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 00:55:20.367145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.367152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367193 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 00:55:20.367212 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.367224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367258 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.367269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 00:55:20.367281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 00:55:20.367288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367321 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.367332 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367339 | orchestrator | 2026-01-09 00:55:20.367345 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-09 00:55:20.367352 | orchestrator | Friday 09 January 2026 00:50:41 +0000 (0:00:01.127) 0:02:14.741 ******** 2026-01-09 00:55:20.367360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367375 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367394 | orchestrator | skipping: [testbed-node-1] 
2026-01-09 00:55:20.367401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-09 00:55:20.367412 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367418 | orchestrator | 2026-01-09 00:55:20.367424 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-09 00:55:20.367430 | orchestrator | Friday 09 January 2026 00:50:42 +0000 (0:00:01.220) 0:02:15.962 ******** 2026-01-09 00:55:20.367435 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.367441 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.367447 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.367453 | orchestrator | 2026-01-09 00:55:20.367459 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-09 00:55:20.367464 | orchestrator | Friday 09 January 2026 00:50:44 +0000 (0:00:01.707) 0:02:17.670 ******** 2026-01-09 00:55:20.367470 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.367476 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.367482 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.367488 | orchestrator | 2026-01-09 00:55:20.367494 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-09 00:55:20.367499 | orchestrator | Friday 09 January 2026 00:50:46 +0000 (0:00:01.883) 0:02:19.554 ******** 2026-01-09 00:55:20.367505 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367512 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.367516 | 
orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367520 | orchestrator | 2026-01-09 00:55:20.367524 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-09 00:55:20.367528 | orchestrator | Friday 09 January 2026 00:50:46 +0000 (0:00:00.546) 0:02:20.100 ******** 2026-01-09 00:55:20.367532 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.367536 | orchestrator | 2026-01-09 00:55:20.367540 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-09 00:55:20.367544 | orchestrator | Friday 09 January 2026 00:50:47 +0000 (0:00:00.827) 0:02:20.928 ******** 2026-01-09 00:55:20.367559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 00:55:20.367570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 00:55:20.367598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 00:55:20.367624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367631 | orchestrator | 2026-01-09 00:55:20.367636 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-09 00:55:20.367642 | orchestrator | Friday 09 January 2026 00:50:54 +0000 (0:00:06.835) 0:02:27.764 ******** 
2026-01-09 00:55:20.367648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 00:55:20.367671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367678 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.367684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 00:55:20.367702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367711 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', 
'', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 00:55:20.367727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.367743 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367749 | orchestrator | 2026-01-09 00:55:20.367755 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-09 00:55:20.367762 | orchestrator | Friday 09 January 2026 00:50:58 +0000 (0:00:04.486) 0:02:32.250 ******** 2026-01-09 00:55:20.367770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367777 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367784 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367826 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-09 00:55:20.367866 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.367874 | orchestrator | 2026-01-09 00:55:20.367880 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-09 00:55:20.367886 | orchestrator | Friday 09 January 2026 00:51:02 +0000 (0:00:03.778) 0:02:36.029 ******** 2026-01-09 00:55:20.367892 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.367898 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.367904 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.367910 | orchestrator | 2026-01-09 00:55:20.367920 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-09 00:55:20.367926 | orchestrator | Friday 09 January 2026 00:51:03 +0000 (0:00:01.353) 0:02:37.382 ******** 2026-01-09 00:55:20.367932 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.367938 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.367945 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.367951 | 
orchestrator | 2026-01-09 00:55:20.367957 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-09 00:55:20.367969 | orchestrator | Friday 09 January 2026 00:51:06 +0000 (0:00:02.209) 0:02:39.591 ******** 2026-01-09 00:55:20.367975 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.367981 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.367987 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.367993 | orchestrator | 2026-01-09 00:55:20.367999 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-09 00:55:20.368005 | orchestrator | Friday 09 January 2026 00:51:06 +0000 (0:00:00.570) 0:02:40.162 ******** 2026-01-09 00:55:20.368011 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.368017 | orchestrator | 2026-01-09 00:55:20.368024 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-09 00:55:20.368030 | orchestrator | Friday 09 January 2026 00:51:07 +0000 (0:00:00.917) 0:02:41.080 ******** 2026-01-09 00:55:20.368294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 00:55:20.368306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 00:55:20.368318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 00:55:20.368322 | orchestrator | 2026-01-09 00:55:20.368326 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-09 00:55:20.368330 | orchestrator | Friday 09 January 2026 00:51:11 +0000 (0:00:03.540) 0:02:44.620 ******** 2026-01-09 00:55:20.368334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 00:55:20.368343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 00:55:20.368347 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368351 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 00:55:20.368359 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368363 | orchestrator | 2026-01-09 00:55:20.368370 | orchestrator | TASK 
[haproxy-config : Configuring firewall for grafana] *********************** 2026-01-09 00:55:20.368374 | orchestrator | Friday 09 January 2026 00:51:11 +0000 (0:00:00.664) 0:02:45.284 ******** 2026-01-09 00:55:20.368382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368390 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368402 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-09 00:55:20.368413 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368417 | orchestrator | 2026-01-09 00:55:20.368422 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-09 
00:55:20.368429 | orchestrator | Friday 09 January 2026 00:51:12 +0000 (0:00:00.660) 0:02:45.944 ******** 2026-01-09 00:55:20.368435 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.368440 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.368446 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.368452 | orchestrator | 2026-01-09 00:55:20.368458 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-09 00:55:20.368464 | orchestrator | Friday 09 January 2026 00:51:13 +0000 (0:00:01.280) 0:02:47.225 ******** 2026-01-09 00:55:20.368470 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.368476 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.368481 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.368486 | orchestrator | 2026-01-09 00:55:20.368492 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-09 00:55:20.368498 | orchestrator | Friday 09 January 2026 00:51:15 +0000 (0:00:02.101) 0:02:49.326 ******** 2026-01-09 00:55:20.368503 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368509 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368514 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368520 | orchestrator | 2026-01-09 00:55:20.368526 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-09 00:55:20.368532 | orchestrator | Friday 09 January 2026 00:51:16 +0000 (0:00:00.425) 0:02:49.751 ******** 2026-01-09 00:55:20.368538 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.368544 | orchestrator | 2026-01-09 00:55:20.368550 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-09 00:55:20.368556 | orchestrator | Friday 09 January 2026 00:51:17 +0000 (0:00:00.824) 0:02:50.576 ******** 
2026-01-09 00:55:20.368575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 00:55:20.368590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 00:55:20.368607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 00:55:20.368620 | orchestrator | 2026-01-09 00:55:20.368628 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-09 00:55:20.368634 | orchestrator | Friday 09 January 2026 00:51:20 +0000 (0:00:03.230) 0:02:53.806 ******** 2026-01-09 00:55:20.368644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 00:55:20.368656 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 00:55:20.368674 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 00:55:20.368696 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368703 | orchestrator | 2026-01-09 00:55:20.368710 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-09 00:55:20.368716 | orchestrator | Friday 09 January 2026 00:51:21 +0000 (0:00:01.250) 0:02:55.057 ******** 2026-01-09 00:55:20.368727 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}})  2026-01-09 00:55:20.368772 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-09 00:55:20.368813 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-09 00:55:20.368865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-09 00:55:20.368872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-09 00:55:20.368879 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368884 | orchestrator | 2026-01-09 00:55:20.368890 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-09 00:55:20.368897 | orchestrator | Friday 09 January 2026 00:51:22 +0000 (0:00:01.073) 0:02:56.131 ******** 2026-01-09 00:55:20.368903 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.368909 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.368914 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.368918 | orchestrator | 2026-01-09 00:55:20.368922 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-09 00:55:20.368928 | orchestrator | Friday 09 January 2026 00:51:24 +0000 (0:00:01.460) 0:02:57.591 ******** 
2026-01-09 00:55:20.368934 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.368940 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.368947 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.368952 | orchestrator | 2026-01-09 00:55:20.368959 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-09 00:55:20.368965 | orchestrator | Friday 09 January 2026 00:51:26 +0000 (0:00:02.375) 0:02:59.967 ******** 2026-01-09 00:55:20.368971 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.368977 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.368984 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.368990 | orchestrator | 2026-01-09 00:55:20.368997 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-09 00:55:20.369004 | orchestrator | Friday 09 January 2026 00:51:26 +0000 (0:00:00.320) 0:03:00.287 ******** 2026-01-09 00:55:20.369010 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369017 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369023 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.369029 | orchestrator | 2026-01-09 00:55:20.369034 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-09 00:55:20.369039 | orchestrator | Friday 09 January 2026 00:51:27 +0000 (0:00:00.593) 0:03:00.880 ******** 2026-01-09 00:55:20.369049 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.369054 | orchestrator | 2026-01-09 00:55:20.369059 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-09 00:55:20.369063 | orchestrator | Friday 09 January 2026 00:51:28 +0000 (0:00:01.049) 0:03:01.930 ******** 2026-01-09 00:55:20.369069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 00:55:20.369079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 00:55:20.369095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 00:55:20.369100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369108 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 00:55:20.369115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 00:55:20.369119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 00:55:20.369131 | orchestrator | 2026-01-09 00:55:20.369134 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-09 00:55:20.369138 | orchestrator | Friday 09 January 2026 00:51:32 +0000 (0:00:03.940) 0:03:05.870 ******** 2026-01-09 00:55:20.369142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 00:55:20.369150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 00:55:20.369158 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 00:55:20.369173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-01-09 00:55:20.369184 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 00:55:20.369192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 00:55:20.369199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 00:55:20.369203 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.369207 | orchestrator | 2026-01-09 00:55:20.369210 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-09 00:55:20.369214 | orchestrator | Friday 09 January 2026 00:51:32 +0000 (0:00:00.573) 0:03:06.443 ******** 2026-01-09 00:55:20.369218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369227 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369241 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-09 00:55:20.369260 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.369264 | orchestrator | 2026-01-09 00:55:20.369268 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-09 00:55:20.369272 | orchestrator | Friday 09 January 2026 00:51:33 +0000 (0:00:00.764) 0:03:07.207 ******** 2026-01-09 00:55:20.369275 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.369279 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.369283 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.369286 | orchestrator | 2026-01-09 00:55:20.369290 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-09 00:55:20.369294 | orchestrator | Friday 09 January 2026 00:51:35 +0000 (0:00:01.418) 0:03:08.626 ******** 2026-01-09 00:55:20.369298 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.369301 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.369305 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.369309 | orchestrator | 2026-01-09 00:55:20.369313 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-09 
00:55:20.369316 | orchestrator | Friday 09 January 2026 00:51:37 +0000 (0:00:02.078) 0:03:10.705 ******** 2026-01-09 00:55:20.369320 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369324 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369328 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.369331 | orchestrator | 2026-01-09 00:55:20.369335 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-09 00:55:20.369339 | orchestrator | Friday 09 January 2026 00:51:38 +0000 (0:00:00.820) 0:03:11.526 ******** 2026-01-09 00:55:20.369342 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.369346 | orchestrator | 2026-01-09 00:55:20.369350 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-09 00:55:20.369354 | orchestrator | Friday 09 January 2026 00:51:39 +0000 (0:00:01.129) 0:03:12.655 ******** 2026-01-09 00:55:20.369361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 00:55:20.369366 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 00:55:20.369389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 00:55:20.369404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369410 | orchestrator | 2026-01-09 00:55:20.369416 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-09 00:55:20.369422 | orchestrator | Friday 09 January 2026 00:51:43 +0000 (0:00:04.628) 0:03:17.284 ******** 2026-01-09 00:55:20.369428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 00:55:20.369443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369449 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 00:55:20.369461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369467 | orchestrator | skipping: 
[testbed-node-2] 2026-01-09 00:55:20.369478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 00:55:20.369490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369497 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369504 | orchestrator | 2026-01-09 00:55:20.369513 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-09 00:55:20.369520 | 
orchestrator | Friday 09 January 2026 00:51:45 +0000 (0:00:01.454) 0:03:18.738 ******** 2026-01-09 00:55:20.369527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369538 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369549 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.369553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-09 00:55:20.369561 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.369565 | orchestrator | 2026-01-09 00:55:20.369568 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-09 00:55:20.369572 | orchestrator | Friday 09 January 2026 00:51:46 +0000 (0:00:01.101) 0:03:19.840 ******** 2026-01-09 
00:55:20.369576 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.369580 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.369583 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.369587 | orchestrator | 2026-01-09 00:55:20.369591 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-09 00:55:20.369595 | orchestrator | Friday 09 January 2026 00:51:47 +0000 (0:00:01.535) 0:03:21.375 ******** 2026-01-09 00:55:20.369598 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.369602 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.369606 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.369610 | orchestrator | 2026-01-09 00:55:20.369614 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-09 00:55:20.369617 | orchestrator | Friday 09 January 2026 00:51:50 +0000 (0:00:02.211) 0:03:23.586 ******** 2026-01-09 00:55:20.369621 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.369625 | orchestrator | 2026-01-09 00:55:20.369628 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-09 00:55:20.369637 | orchestrator | Friday 09 January 2026 00:51:51 +0000 (0:00:01.334) 0:03:24.920 ******** 2026-01-09 00:55:20.369645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-09 00:55:20.369649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-09 00:55:20.369669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-09 00:55:20.369694 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369709 | orchestrator | 2026-01-09 00:55:20.369713 | orchestrator | TASK [haproxy-config : Add configuration for 
manila when using single external frontend] *** 2026-01-09 00:55:20.369716 | orchestrator | Friday 09 January 2026 00:51:55 +0000 (0:00:03.718) 0:03:28.639 ******** 2026-01-09 00:55:20.369723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-09 00:55:20.369727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369930 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.369937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-09 
00:55:20.369950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.369974 | orchestrator | skipping: 
[testbed-node-1] 2026-01-09 00:55:20.369986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-09 00:55:20.369992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370057 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370063 | orchestrator | 2026-01-09 00:55:20.370067 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-09 00:55:20.370072 | orchestrator | Friday 09 January 2026 00:51:55 +0000 (0:00:00.711) 0:03:29.351 ******** 2026-01-09 00:55:20.370076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370093 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370096 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370100 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-09 00:55:20.370112 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370116 | orchestrator | 2026-01-09 00:55:20.370119 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-09 00:55:20.370123 | orchestrator | Friday 09 January 2026 00:51:57 +0000 (0:00:01.293) 0:03:30.645 ******** 2026-01-09 00:55:20.370127 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.370134 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.370139 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.370142 | orchestrator | 2026-01-09 00:55:20.370147 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-09 00:55:20.370151 | orchestrator | Friday 09 January 2026 00:51:58 +0000 (0:00:01.262) 0:03:31.907 ******** 2026-01-09 00:55:20.370155 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.370159 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.370163 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.370166 | orchestrator | 2026-01-09 00:55:20.370170 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-09 
00:55:20.370174 | orchestrator | Friday 09 January 2026 00:52:00 +0000 (0:00:02.270) 0:03:34.178 ******** 2026-01-09 00:55:20.370178 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.370187 | orchestrator | 2026-01-09 00:55:20.370191 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-09 00:55:20.370195 | orchestrator | Friday 09 January 2026 00:52:02 +0000 (0:00:01.300) 0:03:35.479 ******** 2026-01-09 00:55:20.370199 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-09 00:55:20.370203 | orchestrator | 2026-01-09 00:55:20.370207 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-09 00:55:20.370211 | orchestrator | Friday 09 January 2026 00:52:05 +0000 (0:00:03.193) 0:03:38.672 ******** 2026-01-09 00:55:20.370216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370244 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370248 | orchestrator | skipping: 
[testbed-node-1] 2026-01-09 00:55:20.370257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370269 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370277 | orchestrator | 2026-01-09 00:55:20.370281 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-09 00:55:20.370284 | orchestrator | Friday 09 January 2026 00:52:07 +0000 (0:00:02.405) 0:03:41.078 ******** 2026-01-09 00:55:20.370288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370297 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370319 | orchestrator | skipping: 
[testbed-node-1] 2026-01-09 00:55:20.370327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-09 00:55:20.370331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-09 00:55:20.370335 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370339 | orchestrator | 2026-01-09 00:55:20.370342 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-09 00:55:20.370346 | orchestrator | Friday 09 January 2026 00:52:10 +0000 (0:00:02.842) 0:03:43.921 ******** 2026-01-09 00:55:20.370355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370364 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370376 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-09 00:55:20.370389 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370392 | orchestrator | 2026-01-09 00:55:20.370396 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-09 00:55:20.370400 | orchestrator | Friday 09 January 2026 00:52:14 +0000 (0:00:03.622) 0:03:47.543 ******** 2026-01-09 00:55:20.370407 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.370410 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.370414 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.370418 | orchestrator | 2026-01-09 00:55:20.370422 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-09 00:55:20.370425 | orchestrator | Friday 09 January 2026 00:52:15 +0000 (0:00:01.898) 0:03:49.442 ******** 2026-01-09 00:55:20.370429 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370433 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370437 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370440 | orchestrator | 2026-01-09 00:55:20.370444 | orchestrator | TASK [include_role : masakari] 
************************************************* 2026-01-09 00:55:20.370448 | orchestrator | Friday 09 January 2026 00:52:17 +0000 (0:00:01.482) 0:03:50.924 ******** 2026-01-09 00:55:20.370453 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370457 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370461 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370465 | orchestrator | 2026-01-09 00:55:20.370468 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-09 00:55:20.370472 | orchestrator | Friday 09 January 2026 00:52:17 +0000 (0:00:00.371) 0:03:51.296 ******** 2026-01-09 00:55:20.370476 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.370480 | orchestrator | 2026-01-09 00:55:20.370484 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-09 00:55:20.370487 | orchestrator | Friday 09 January 2026 00:52:19 +0000 (0:00:01.363) 0:03:52.659 ******** 2026-01-09 00:55:20.370491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-09 00:55:20.370513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-09 00:55:20.370517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-09 00:55:20.370525 | orchestrator | 2026-01-09 00:55:20.370529 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-09 00:55:20.370533 | orchestrator | Friday 09 January 2026 00:52:20 +0000 (0:00:01.602) 0:03:54.262 ******** 2026-01-09 00:55:20.370539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-09 00:55:20.370547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-09 00:55:20.370552 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370555 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-09 00:55:20.370563 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370567 | orchestrator | 2026-01-09 00:55:20.370571 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-09 00:55:20.370575 | orchestrator | Friday 09 January 2026 00:52:21 +0000 (0:00:00.410) 0:03:54.673 ******** 2026-01-09 00:55:20.370579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-09 00:55:20.370583 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-09 00:55:20.370590 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-09 00:55:20.370602 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370606 | orchestrator | 2026-01-09 00:55:20.370609 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-09 00:55:20.370613 | orchestrator | Friday 
09 January 2026 00:52:22 +0000 (0:00:00.924) 0:03:55.597 ******** 2026-01-09 00:55:20.370617 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370621 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370625 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370628 | orchestrator | 2026-01-09 00:55:20.370632 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-09 00:55:20.370636 | orchestrator | Friday 09 January 2026 00:52:22 +0000 (0:00:00.461) 0:03:56.059 ******** 2026-01-09 00:55:20.370640 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370643 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370647 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370651 | orchestrator | 2026-01-09 00:55:20.370658 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-09 00:55:20.370662 | orchestrator | Friday 09 January 2026 00:52:23 +0000 (0:00:01.397) 0:03:57.457 ******** 2026-01-09 00:55:20.370666 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.370669 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.370673 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.370677 | orchestrator | 2026-01-09 00:55:20.370681 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-09 00:55:20.370684 | orchestrator | Friday 09 January 2026 00:52:24 +0000 (0:00:00.328) 0:03:57.786 ******** 2026-01-09 00:55:20.370688 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.370692 | orchestrator | 2026-01-09 00:55:20.370696 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-09 00:55:20.370699 | orchestrator | Friday 09 January 2026 00:52:25 +0000 (0:00:01.444) 0:03:59.230 ******** 2026-01-09 00:55:20.370706 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 00:55:20.370711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-09 00:55:20.370734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 00:55:20.370755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.370782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2026-01-09 00:55:20.370800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-01-09 00:55:20.370832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.370864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.370877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 00:55:20.370885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.370920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.370933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-09 00:55:20.370944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.370963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.370974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-01-09 00:55:20.371286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.371299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-09 00:55:20.371303 | orchestrator |
2026-01-09 00:55:20.371307 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-09 00:55:20.371311 | orchestrator | Friday 09 January 2026 00:52:30 +0000 (0:00:04.494) 0:04:03.725 ********
2026-01-09 00:55:20.371327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 00:55:20.371337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2026-01-09 00:55:20.371353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-09 00:55:20.371357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371382 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 00:55:20.371397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-09 00:55:20.371461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.371474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 00:55:20.371514 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.371519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  
2026-01-09 00:55:20.371558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2026-01-09 00:55:20.371594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-09 00:55:20.371599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.371654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371673 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.371683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-09 00:55:20.371887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.371894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-09 00:55:20.371910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-09 00:55:20.371916 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.371922 | orchestrator | 2026-01-09 00:55:20.371929 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-09 00:55:20.371936 | orchestrator | Friday 09 January 2026 00:52:31 +0000 (0:00:01.544) 0:04:05.269 ******** 2026-01-09 00:55:20.371942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371956 | 
orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.371965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371978 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.371984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-09 00:55:20.371996 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372002 | orchestrator | 2026-01-09 00:55:20.372008 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-09 00:55:20.372014 | orchestrator | Friday 09 January 2026 00:52:33 +0000 (0:00:02.076) 0:04:07.346 ******** 2026-01-09 00:55:20.372020 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.372026 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372032 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.372038 | orchestrator | 2026-01-09 00:55:20.372043 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-09 00:55:20.372049 | orchestrator | Friday 09 January 2026 00:52:35 +0000 (0:00:01.553) 0:04:08.899 ******** 2026-01-09 00:55:20.372055 | orchestrator | changed: [testbed-node-0] 2026-01-09 
00:55:20.372062 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372067 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.372073 | orchestrator | 2026-01-09 00:55:20.372079 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-09 00:55:20.372085 | orchestrator | Friday 09 January 2026 00:52:37 +0000 (0:00:02.214) 0:04:11.114 ******** 2026-01-09 00:55:20.372091 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.372097 | orchestrator | 2026-01-09 00:55:20.372103 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-09 00:55:20.372109 | orchestrator | Friday 09 January 2026 00:52:38 +0000 (0:00:01.262) 0:04:12.377 ******** 2026-01-09 00:55:20.372122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372150 | orchestrator | 2026-01-09 00:55:20.372156 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-09 00:55:20.372162 | orchestrator | Friday 09 January 2026 00:52:42 +0000 (0:00:03.737) 0:04:16.114 ******** 2026-01-09 00:55:20.372168 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372174 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.372184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372190 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.372199 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372205 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372211 | orchestrator | 2026-01-09 00:55:20.372217 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-09 00:55:20.372223 | orchestrator | Friday 09 January 2026 00:52:43 +0000 (0:00:00.558) 0:04:16.673 ******** 2026-01-09 00:55:20.372229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372242 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.372252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372265 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.372271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372282 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372288 | orchestrator | 2026-01-09 00:55:20.372294 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-09 00:55:20.372305 | orchestrator | Friday 09 January 2026 00:52:44 +0000 (0:00:00.837) 0:04:17.511 ******** 2026-01-09 00:55:20.372311 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.372318 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372324 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.372330 | orchestrator | 2026-01-09 00:55:20.372337 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-09 00:55:20.372341 | orchestrator | Friday 09 January 2026 00:52:46 +0000 (0:00:02.064) 0:04:19.576 ******** 2026-01-09 00:55:20.372345 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.372349 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372352 | orchestrator | changed: [testbed-node-2] 2026-01-09 
00:55:20.372356 | orchestrator | 2026-01-09 00:55:20.372360 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-09 00:55:20.372364 | orchestrator | Friday 09 January 2026 00:52:48 +0000 (0:00:02.052) 0:04:21.628 ******** 2026-01-09 00:55:20.372368 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.372372 | orchestrator | 2026-01-09 00:55:20.372375 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-09 00:55:20.372379 | orchestrator | Friday 09 January 2026 00:52:49 +0000 (0:00:01.551) 0:04:23.179 ******** 2026-01-09 00:55:20.372388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372395 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.372430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372449 | orchestrator | 2026-01-09 00:55:20.372453 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-09 00:55:20.372457 | orchestrator | Friday 09 January 2026 00:52:53 +0000 (0:00:04.206) 0:04:27.386 ******** 2026-01-09 00:55:20.372461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372477 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.372483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372508 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.372521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.372530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.372554 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372560 | orchestrator | 2026-01-09 00:55:20.372565 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-09 00:55:20.372572 | orchestrator | Friday 09 January 2026 00:52:55 +0000 (0:00:01.299) 0:04:28.686 ******** 2026-01-09 00:55:20.372578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372604 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.372610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}})  2026-01-09 00:55:20.372616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372636 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.372642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-09 00:55:20.372679 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372685 | 
orchestrator | 2026-01-09 00:55:20.372691 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-09 00:55:20.372703 | orchestrator | Friday 09 January 2026 00:52:56 +0000 (0:00:00.905) 0:04:29.592 ******** 2026-01-09 00:55:20.372709 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.372714 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372720 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.372726 | orchestrator | 2026-01-09 00:55:20.372731 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-09 00:55:20.372738 | orchestrator | Friday 09 January 2026 00:52:57 +0000 (0:00:01.477) 0:04:31.070 ******** 2026-01-09 00:55:20.372743 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.372750 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.372756 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.372762 | orchestrator | 2026-01-09 00:55:20.372768 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-09 00:55:20.372775 | orchestrator | Friday 09 January 2026 00:52:59 +0000 (0:00:02.163) 0:04:33.233 ******** 2026-01-09 00:55:20.372781 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.372788 | orchestrator | 2026-01-09 00:55:20.372794 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-09 00:55:20.372806 | orchestrator | Friday 09 January 2026 00:53:01 +0000 (0:00:01.649) 0:04:34.882 ******** 2026-01-09 00:55:20.372813 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-09 00:55:20.372820 | orchestrator | 2026-01-09 00:55:20.372826 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy 
config] *** 2026-01-09 00:55:20.372830 | orchestrator | Friday 09 January 2026 00:53:02 +0000 (0:00:00.812) 0:04:35.695 ******** 2026-01-09 00:55:20.372834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-09 00:55:20.372877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-09 00:55:20.372883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-09 00:55:20.372887 | orchestrator | 2026-01-09 00:55:20.372891 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-09 00:55:20.372896 | orchestrator | Friday 
09 January 2026 00:53:07 +0000 (0:00:05.003) 0:04:40.699 ******** 2026-01-09 00:55:20.372900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.372919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.372923 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.372927 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.372933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.372940 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.372947 | orchestrator | 2026-01-09 00:55:20.372954 | orchestrator | TASK 
[haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-09 00:55:20.372960 | orchestrator | Friday 09 January 2026 00:53:08 +0000 (0:00:01.139) 0:04:41.838 ******** 2026-01-09 00:55:20.373017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-09 00:55:20.373023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-09 00:55:20.373028 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-09 00:55:20.373036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-09 00:55:20.373040 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-09 00:55:20.373048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2026-01-09 00:55:20.373052 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373055 | orchestrator | 2026-01-09 00:55:20.373059 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-09 00:55:20.373063 | orchestrator | Friday 09 January 2026 00:53:09 +0000 (0:00:01.550) 0:04:43.389 ******** 2026-01-09 00:55:20.373067 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.373072 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.373076 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.373079 | orchestrator | 2026-01-09 00:55:20.373083 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-09 00:55:20.373092 | orchestrator | Friday 09 January 2026 00:53:12 +0000 (0:00:02.792) 0:04:46.181 ******** 2026-01-09 00:55:20.373096 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.373100 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.373104 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.373108 | orchestrator | 2026-01-09 00:55:20.373111 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-09 00:55:20.373115 | orchestrator | Friday 09 January 2026 00:53:15 +0000 (0:00:03.172) 0:04:49.354 ******** 2026-01-09 00:55:20.373120 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-09 00:55:20.373124 | orchestrator | 2026-01-09 00:55:20.373128 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-09 00:55:20.373132 | orchestrator | Friday 09 January 2026 00:53:17 +0000 (0:00:01.504) 0:04:50.859 ******** 2026-01-09 00:55:20.373136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373140 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373148 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373170 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373174 | orchestrator | 2026-01-09 00:55:20.373177 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-09 
00:55:20.373182 | orchestrator | Friday 09 January 2026 00:53:18 +0000 (0:00:01.285) 0:04:52.145 ******** 2026-01-09 00:55:20.373186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373190 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373202 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-09 00:55:20.373211 | orchestrator | skipping: 
[testbed-node-2] 2026-01-09 00:55:20.373215 | orchestrator | 2026-01-09 00:55:20.373218 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-09 00:55:20.373222 | orchestrator | Friday 09 January 2026 00:53:20 +0000 (0:00:01.371) 0:04:53.516 ******** 2026-01-09 00:55:20.373226 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373230 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373234 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373237 | orchestrator | 2026-01-09 00:55:20.373241 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-09 00:55:20.373302 | orchestrator | Friday 09 January 2026 00:53:22 +0000 (0:00:02.065) 0:04:55.582 ******** 2026-01-09 00:55:20.373320 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.373325 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.373328 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.373332 | orchestrator | 2026-01-09 00:55:20.373336 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-09 00:55:20.373340 | orchestrator | Friday 09 January 2026 00:53:24 +0000 (0:00:02.606) 0:04:58.189 ******** 2026-01-09 00:55:20.373344 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.373347 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.373351 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.373355 | orchestrator | 2026-01-09 00:55:20.373359 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-09 00:55:20.373363 | orchestrator | Friday 09 January 2026 00:53:27 +0000 (0:00:02.920) 0:05:01.109 ******** 2026-01-09 00:55:20.373370 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-09 00:55:20.373374 | orchestrator 
| 2026-01-09 00:55:20.373378 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-09 00:55:20.373382 | orchestrator | Friday 09 January 2026 00:53:28 +0000 (0:00:00.897) 0:05:02.006 ******** 2026-01-09 00:55:20.373386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373390 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373408 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373416 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373420 | orchestrator | 2026-01-09 00:55:20.373424 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-09 00:55:20.373428 | orchestrator | Friday 09 January 2026 00:53:30 +0000 (0:00:01.546) 0:05:03.553 ******** 2026-01-09 00:55:20.373432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373436 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373444 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-09 00:55:20.373452 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373456 | orchestrator | 2026-01-09 00:55:20.373460 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-09 00:55:20.373464 | orchestrator | Friday 09 January 2026 00:53:31 +0000 (0:00:01.410) 0:05:04.963 ******** 2026-01-09 00:55:20.373471 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373475 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373479 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373482 | orchestrator | 2026-01-09 00:55:20.373486 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-09 00:55:20.373490 | orchestrator | Friday 09 January 2026 00:53:33 +0000 (0:00:01.617) 0:05:06.581 ******** 2026-01-09 00:55:20.373494 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.373498 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.373502 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.373505 | orchestrator | 2026-01-09 00:55:20.373509 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-09 00:55:20.373513 | orchestrator | Friday 09 January 2026 00:53:36 +0000 (0:00:02.923) 0:05:09.504 ******** 2026-01-09 00:55:20.373522 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.373525 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.373529 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.373533 | orchestrator | 2026-01-09 00:55:20.373537 | orchestrator | TASK [include_role : octavia] 
************************************************** 2026-01-09 00:55:20.373541 | orchestrator | Friday 09 January 2026 00:53:39 +0000 (0:00:03.461) 0:05:12.965 ******** 2026-01-09 00:55:20.373545 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.373549 | orchestrator | 2026-01-09 00:55:20.373553 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-09 00:55:20.373557 | orchestrator | Friday 09 January 2026 00:53:41 +0000 (0:00:01.677) 0:05:14.643 ******** 2026-01-09 00:55:20.373566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.373570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 
00:55:20.373594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.373602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.373610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.373634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.373646 | orchestrator | 2026-01-09 00:55:20.373650 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-09 00:55:20.373654 | orchestrator | Friday 09 January 2026 00:53:44 +0000 (0:00:03.510) 0:05:18.153 ******** 2026-01-09 00:55:20.373658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.373665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373696 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.373700 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.373708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.373736 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.373744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 00:55:20.373748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 00:55:20.373764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 00:55:20.373768 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373772 | orchestrator | 2026-01-09 00:55:20.373776 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-09 00:55:20.373780 | orchestrator | Friday 09 January 2026 00:53:45 +0000 (0:00:00.780) 0:05:18.933 ******** 2026-01-09 
00:55:20.373784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373792 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.373799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373807 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.373811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-09 00:55:20.373819 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.373823 | orchestrator | 2026-01-09 00:55:20.373827 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-09 00:55:20.373831 | orchestrator | Friday 09 January 2026 00:53:47 +0000 (0:00:01.547) 
0:05:20.481 ******** 2026-01-09 00:55:20.373835 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.373856 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.373861 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.373865 | orchestrator | 2026-01-09 00:55:20.373869 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-09 00:55:20.373873 | orchestrator | Friday 09 January 2026 00:53:48 +0000 (0:00:01.353) 0:05:21.834 ******** 2026-01-09 00:55:20.373876 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.373880 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.373884 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.373888 | orchestrator | 2026-01-09 00:55:20.373892 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-09 00:55:20.373899 | orchestrator | Friday 09 January 2026 00:53:50 +0000 (0:00:02.168) 0:05:24.002 ******** 2026-01-09 00:55:20.373903 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.373907 | orchestrator | 2026-01-09 00:55:20.373910 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-09 00:55:20.373914 | orchestrator | Friday 09 January 2026 00:53:51 +0000 (0:00:01.403) 0:05:25.406 ******** 2026-01-09 00:55:20.373919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:55:20.373926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:55:20.373937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-01-09 00:55:20.373944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:55:20.373958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:55:20.373971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:55:20.373978 | orchestrator | 2026-01-09 00:55:20.373985 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-09 00:55:20.373991 | orchestrator | Friday 09 January 2026 00:53:57 +0000 (0:00:05.756) 0:05:31.163 ******** 2026-01-09 00:55:20.374001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:55:20.374008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:55:20.374081 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:55:20.374096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:55:20.374101 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:55:20.374117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:55:20.374125 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374129 | orchestrator | 2026-01-09 00:55:20.374133 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-09 00:55:20.374136 | orchestrator | Friday 09 
January 2026 00:53:58 +0000 (0:00:00.688) 0:05:31.852 ******** 2026-01-09 00:55:20.374140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-09 00:55:20.374145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374153 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-09 00:55:20.374161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374168 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}})  2026-01-09 00:55:20.374179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-09 00:55:20.374187 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374191 | orchestrator | 2026-01-09 00:55:20.374194 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-09 00:55:20.374198 | orchestrator | Friday 09 January 2026 00:53:59 +0000 (0:00:00.946) 0:05:32.798 ******** 2026-01-09 00:55:20.374202 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374206 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374210 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374214 | orchestrator | 2026-01-09 00:55:20.374218 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-09 00:55:20.374221 | orchestrator | Friday 09 January 2026 00:54:00 +0000 (0:00:00.836) 0:05:33.635 ******** 2026-01-09 00:55:20.374225 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374229 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374233 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374241 | orchestrator | 2026-01-09 00:55:20.374247 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-09 00:55:20.374251 | orchestrator | Friday 09 January 2026 00:54:01 +0000 (0:00:01.428) 0:05:35.064 ******** 2026-01-09 00:55:20.374255 
| orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.374259 | orchestrator | 2026-01-09 00:55:20.374263 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-09 00:55:20.374266 | orchestrator | Friday 09 January 2026 00:54:03 +0000 (0:00:01.418) 0:05:36.483 ******** 2026-01-09 00:55:20.374271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 00:55:20.374275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 00:55:20.374306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 00:55:20.374314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-09 00:55:20.374356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374367 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-09 
00:55:20.374391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-09 00:55:20.374409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-01-09 00:55:20.374436 | orchestrator | 2026-01-09 00:55:20.374440 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-09 00:55:20.374444 | orchestrator | Friday 09 January 2026 00:54:07 +0000 (0:00:04.679) 0:05:41.162 ******** 2026-01-09 00:55:20.374450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-09 00:55:20.374458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-09 00:55:20.374485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374507 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-09 00:55:20.374515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374541 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-09 00:55:20.374546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374568 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-09 00:55:20.374579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 00:55:20.374585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374593 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-09 00:55:20.374609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-09 00:55:20.374613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 00:55:20.374623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 00:55:20.374627 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374631 | orchestrator | 2026-01-09 00:55:20.374635 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-09 00:55:20.374639 | orchestrator | Friday 09 January 2026 00:54:09 +0000 (0:00:01.322) 0:05:42.485 ******** 2026-01-09 00:55:20.374643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374666 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374674 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374689 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-09 00:55:20.374701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-09 00:55:20.374711 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374715 | orchestrator | 2026-01-09 00:55:20.374719 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-09 00:55:20.374723 | orchestrator | Friday 09 January 2026 00:54:10 +0000 (0:00:01.043) 0:05:43.529 ******** 2026-01-09 00:55:20.374727 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374730 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374735 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374738 | orchestrator | 2026-01-09 00:55:20.374742 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-09 00:55:20.374746 | orchestrator | Friday 09 January 2026 00:54:10 +0000 (0:00:00.473) 0:05:44.002 ******** 2026-01-09 00:55:20.374750 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374753 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374757 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374761 | orchestrator | 2026-01-09 00:55:20.374765 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-09 00:55:20.374768 | orchestrator | Friday 09 January 2026 00:54:12 +0000 (0:00:01.518) 0:05:45.521 ******** 2026-01-09 00:55:20.374772 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.374776 | orchestrator | 2026-01-09 00:55:20.374785 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-09 00:55:20.374789 | orchestrator | Friday 09 January 2026 00:54:14 +0000 (0:00:01.961) 0:05:47.482 ******** 2026-01-09 00:55:20.374793 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:55:20.374800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:55:20.374805 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-09 00:55:20.374809 | orchestrator | 2026-01-09 00:55:20.374813 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-09 00:55:20.374818 | orchestrator | Friday 09 January 2026 00:54:16 +0000 (0:00:02.716) 0:05:50.199 ******** 2026-01-09 00:55:20.374823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-09 00:55:20.374830 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-09 00:55:20.374850 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-09 00:55:20.374862 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374866 | orchestrator | 2026-01-09 00:55:20.374870 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-09 00:55:20.374874 | orchestrator | Friday 09 January 2026 00:54:17 +0000 (0:00:00.418) 0:05:50.617 ******** 2026-01-09 00:55:20.374878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-09 00:55:20.374881 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-09 00:55:20.374889 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-09 00:55:20.374897 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374901 | orchestrator | 2026-01-09 00:55:20.374904 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-09 00:55:20.374908 | orchestrator | Friday 09 January 2026 00:54:18 +0000 (0:00:01.085) 0:05:51.703 ******** 2026-01-09 00:55:20.374912 | orchestrator | skipping: [testbed-node-0] 
2026-01-09 00:55:20.374919 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374923 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374931 | orchestrator | 2026-01-09 00:55:20.374935 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-09 00:55:20.374939 | orchestrator | Friday 09 January 2026 00:54:18 +0000 (0:00:00.466) 0:05:52.169 ******** 2026-01-09 00:55:20.374943 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.374946 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.374950 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.374954 | orchestrator | 2026-01-09 00:55:20.374958 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-09 00:55:20.374962 | orchestrator | Friday 09 January 2026 00:54:20 +0000 (0:00:01.447) 0:05:53.616 ******** 2026-01-09 00:55:20.374965 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:55:20.374969 | orchestrator | 2026-01-09 00:55:20.374973 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-09 00:55:20.374977 | orchestrator | Friday 09 January 2026 00:54:22 +0000 (0:00:01.969) 0:05:55.586 ******** 2026-01-09 00:55:20.374981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.374985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.374992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.374999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.375006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.375010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-09 00:55:20.375014 | orchestrator | 2026-01-09 00:55:20.375018 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-09 00:55:20.375022 | orchestrator | Friday 09 January 2026 00:54:28 +0000 (0:00:06.346) 0:06:01.932 ******** 2026-01-09 00:55:20.375028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375043 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375055 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-09 00:55:20.375073 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375077 | orchestrator | 2026-01-09 00:55:20.375081 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-09 00:55:20.375121 | orchestrator | Friday 09 January 2026 00:54:29 +0000 (0:00:00.683) 0:06:02.616 ******** 2026-01-09 00:55:20.375125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375157 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375161 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-09 00:55:20.375185 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375189 | orchestrator | 2026-01-09 00:55:20.375193 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-09 00:55:20.375199 | orchestrator | Friday 09 January 2026 00:54:30 +0000 (0:00:01.748) 0:06:04.365 ******** 2026-01-09 00:55:20.375203 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.375207 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.375211 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.375215 | orchestrator | 2026-01-09 00:55:20.375218 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-09 00:55:20.375222 | orchestrator | Friday 09 January 2026 00:54:32 +0000 (0:00:01.397) 0:06:05.763 ******** 2026-01-09 00:55:20.375226 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.375229 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.375233 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.375237 | orchestrator | 
2026-01-09 00:55:20.375241 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-09 00:55:20.375244 | orchestrator | Friday 09 January 2026 00:54:34 +0000 (0:00:02.244) 0:06:08.008 ******** 2026-01-09 00:55:20.375248 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375252 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375255 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375259 | orchestrator | 2026-01-09 00:55:20.375263 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-09 00:55:20.375267 | orchestrator | Friday 09 January 2026 00:54:34 +0000 (0:00:00.401) 0:06:08.410 ******** 2026-01-09 00:55:20.375270 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375274 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375278 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375282 | orchestrator | 2026-01-09 00:55:20.375285 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-09 00:55:20.375292 | orchestrator | Friday 09 January 2026 00:54:35 +0000 (0:00:00.378) 0:06:08.788 ******** 2026-01-09 00:55:20.375296 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375300 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375303 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375307 | orchestrator | 2026-01-09 00:55:20.375311 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-09 00:55:20.375315 | orchestrator | Friday 09 January 2026 00:54:36 +0000 (0:00:00.705) 0:06:09.493 ******** 2026-01-09 00:55:20.375318 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375322 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375326 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375330 | orchestrator | 
2026-01-09 00:55:20.375333 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-09 00:55:20.375337 | orchestrator | Friday 09 January 2026 00:54:36 +0000 (0:00:00.367) 0:06:09.860 ******** 2026-01-09 00:55:20.375341 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375344 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375348 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375352 | orchestrator | 2026-01-09 00:55:20.375355 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-09 00:55:20.375359 | orchestrator | Friday 09 January 2026 00:54:36 +0000 (0:00:00.320) 0:06:10.181 ******** 2026-01-09 00:55:20.375363 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375367 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375371 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375375 | orchestrator | 2026-01-09 00:55:20.375378 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-09 00:55:20.375382 | orchestrator | Friday 09 January 2026 00:54:37 +0000 (0:00:00.870) 0:06:11.052 ******** 2026-01-09 00:55:20.375386 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375390 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375393 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375401 | orchestrator | 2026-01-09 00:55:20.375405 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-09 00:55:20.375409 | orchestrator | Friday 09 January 2026 00:54:38 +0000 (0:00:00.655) 0:06:11.707 ******** 2026-01-09 00:55:20.375413 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375417 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375420 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375424 | orchestrator | 2026-01-09 00:55:20.375428 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-09 00:55:20.375431 | orchestrator | Friday 09 January 2026 00:54:38 +0000 (0:00:00.353) 0:06:12.060 ******** 2026-01-09 00:55:20.375435 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375439 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375442 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375446 | orchestrator | 2026-01-09 00:55:20.375450 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-09 00:55:20.375454 | orchestrator | Friday 09 January 2026 00:54:39 +0000 (0:00:00.868) 0:06:12.929 ******** 2026-01-09 00:55:20.375457 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375461 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375465 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375469 | orchestrator | 2026-01-09 00:55:20.375473 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-09 00:55:20.375477 | orchestrator | Friday 09 January 2026 00:54:40 +0000 (0:00:01.334) 0:06:14.263 ******** 2026-01-09 00:55:20.375481 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375484 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375488 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375492 | orchestrator | 2026-01-09 00:55:20.375496 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-09 00:55:20.375499 | orchestrator | Friday 09 January 2026 00:54:41 +0000 (0:00:00.878) 0:06:15.142 ******** 2026-01-09 00:55:20.375503 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.375507 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.375510 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.375514 | orchestrator | 2026-01-09 00:55:20.375518 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for 
backup haproxy to start] ************** 2026-01-09 00:55:20.375522 | orchestrator | Friday 09 January 2026 00:54:49 +0000 (0:00:08.223) 0:06:23.365 ******** 2026-01-09 00:55:20.375526 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375529 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375533 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375537 | orchestrator | 2026-01-09 00:55:20.375540 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-09 00:55:20.375544 | orchestrator | Friday 09 January 2026 00:54:50 +0000 (0:00:00.873) 0:06:24.239 ******** 2026-01-09 00:55:20.375552 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.375555 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.375559 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.375563 | orchestrator | 2026-01-09 00:55:20.375567 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-09 00:55:20.375570 | orchestrator | Friday 09 January 2026 00:55:00 +0000 (0:00:10.085) 0:06:34.325 ******** 2026-01-09 00:55:20.375574 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375578 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375581 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375585 | orchestrator | 2026-01-09 00:55:20.375589 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-09 00:55:20.375592 | orchestrator | Friday 09 January 2026 00:55:05 +0000 (0:00:04.179) 0:06:38.504 ******** 2026-01-09 00:55:20.375596 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:55:20.375600 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:55:20.375604 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:55:20.375607 | orchestrator | 2026-01-09 00:55:20.375611 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 
2026-01-09 00:55:20.375620 | orchestrator | Friday 09 January 2026 00:55:10 +0000 (0:00:05.013) 0:06:43.518 ******** 2026-01-09 00:55:20.375624 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375627 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375631 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375635 | orchestrator | 2026-01-09 00:55:20.375638 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-09 00:55:20.375642 | orchestrator | Friday 09 January 2026 00:55:10 +0000 (0:00:00.419) 0:06:43.938 ******** 2026-01-09 00:55:20.375646 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375652 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375656 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375660 | orchestrator | 2026-01-09 00:55:20.375663 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-09 00:55:20.375667 | orchestrator | Friday 09 January 2026 00:55:10 +0000 (0:00:00.367) 0:06:44.305 ******** 2026-01-09 00:55:20.375671 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375675 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375678 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375682 | orchestrator | 2026-01-09 00:55:20.375686 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-09 00:55:20.375690 | orchestrator | Friday 09 January 2026 00:55:11 +0000 (0:00:00.717) 0:06:45.023 ******** 2026-01-09 00:55:20.375694 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375698 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375702 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375706 | orchestrator | 2026-01-09 00:55:20.375710 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 
2026-01-09 00:55:20.375713 | orchestrator | Friday 09 January 2026 00:55:11 +0000 (0:00:00.437) 0:06:45.461 ******** 2026-01-09 00:55:20.375717 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375721 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375724 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375728 | orchestrator | 2026-01-09 00:55:20.375732 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-09 00:55:20.375735 | orchestrator | Friday 09 January 2026 00:55:12 +0000 (0:00:00.509) 0:06:45.970 ******** 2026-01-09 00:55:20.375739 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:55:20.375743 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:55:20.375746 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:55:20.375750 | orchestrator | 2026-01-09 00:55:20.375754 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-09 00:55:20.375758 | orchestrator | Friday 09 January 2026 00:55:12 +0000 (0:00:00.348) 0:06:46.319 ******** 2026-01-09 00:55:20.375761 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375765 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375769 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375772 | orchestrator | 2026-01-09 00:55:20.375776 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-09 00:55:20.375780 | orchestrator | Friday 09 January 2026 00:55:18 +0000 (0:00:05.261) 0:06:51.580 ******** 2026-01-09 00:55:20.375784 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:55:20.375787 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:55:20.375791 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:55:20.375795 | orchestrator | 2026-01-09 00:55:20.375798 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:55:20.375802 | 
orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-09 00:55:20.375807 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-09 00:55:20.375811 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-09 00:55:20.375818 | orchestrator | 2026-01-09 00:55:20.375822 | orchestrator | 2026-01-09 00:55:20.375826 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:55:20.375830 | orchestrator | Friday 09 January 2026 00:55:18 +0000 (0:00:00.877) 0:06:52.458 ******** 2026-01-09 00:55:20.375834 | orchestrator | =============================================================================== 2026-01-09 00:55:20.375852 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.09s 2026-01-09 00:55:20.375856 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.22s 2026-01-09 00:55:20.375860 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.84s 2026-01-09 00:55:20.375864 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.35s 2026-01-09 00:55:20.375867 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.15s 2026-01-09 00:55:20.375875 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.85s 2026-01-09 00:55:20.375879 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.76s 2026-01-09 00:55:20.375883 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.35s 2026-01-09 00:55:20.375886 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.26s 2026-01-09 00:55:20.375890 | orchestrator | 
loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 5.09s 2026-01-09 00:55:20.375894 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.01s 2026-01-09 00:55:20.375897 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.00s 2026-01-09 00:55:20.375901 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.72s 2026-01-09 00:55:20.375905 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.68s 2026-01-09 00:55:20.375908 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.63s 2026-01-09 00:55:20.375912 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.59s 2026-01-09 00:55:20.375916 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.49s 2026-01-09 00:55:20.375919 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.49s 2026-01-09 00:55:20.375923 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.21s 2026-01-09 00:55:20.375927 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.21s 2026-01-09 00:55:20.375933 | orchestrator | 2026-01-09 00:55:20 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:55:20.375937 | orchestrator | 2026-01-09 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:55:23.410169 | orchestrator | 2026-01-09 00:55:23 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:55:23.413119 | orchestrator | 2026-01-09 00:55:23 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:55:23.416709 | orchestrator | 2026-01-09 00:55:23 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 
check 2026-01-09 00:57:34.861935 | orchestrator | 2026-01-09 00:57:34 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:34.863693 | orchestrator | 2026-01-09 00:57:34 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:34.865909 | orchestrator | 2026-01-09 00:57:34 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:34.865970 | orchestrator | 2026-01-09 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:37.909120 | orchestrator | 2026-01-09 00:57:37 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:37.911069 | orchestrator | 2026-01-09 00:57:37 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:37.912379 | orchestrator | 2026-01-09 00:57:37 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:37.912411 | orchestrator | 2026-01-09 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:40.945858 | orchestrator | 2026-01-09 00:57:40 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:40.947601 | orchestrator | 2026-01-09 00:57:40 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:40.948742 | orchestrator | 2026-01-09 00:57:40 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:40.948781 | orchestrator | 2026-01-09 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:43.997385 | orchestrator | 2026-01-09 00:57:43 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:44.000808 | orchestrator | 2026-01-09 00:57:43 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:44.004364 | orchestrator | 2026-01-09 00:57:44 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 
00:57:44.004454 | orchestrator | 2026-01-09 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:47.058406 | orchestrator | 2026-01-09 00:57:47 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:47.060912 | orchestrator | 2026-01-09 00:57:47 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:47.061943 | orchestrator | 2026-01-09 00:57:47 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:47.061969 | orchestrator | 2026-01-09 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:50.107546 | orchestrator | 2026-01-09 00:57:50 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:50.111060 | orchestrator | 2026-01-09 00:57:50 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:50.112748 | orchestrator | 2026-01-09 00:57:50 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:50.112775 | orchestrator | 2026-01-09 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:53.168569 | orchestrator | 2026-01-09 00:57:53 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:53.171325 | orchestrator | 2026-01-09 00:57:53 | INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state STARTED 2026-01-09 00:57:53.173647 | orchestrator | 2026-01-09 00:57:53 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED 2026-01-09 00:57:53.173728 | orchestrator | 2026-01-09 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:57:56.231568 | orchestrator | 2026-01-09 00:57:56 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:57:56.231751 | orchestrator | 2026-01-09 00:57:56 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:57:56.236527 | orchestrator | 2026-01-09 00:57:56 | 
INFO  | Task 8e104a68-296b-4a8f-909b-9c80538d72d6 is in state SUCCESS
2026-01-09 00:57:56.237919 | orchestrator |
2026-01-09 00:57:56.237978 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-09 00:57:56.237989 | orchestrator | 2.16.14
2026-01-09 00:57:56.237995 | orchestrator |
2026-01-09 00:57:56.237999 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-09 00:57:56.238005 | orchestrator |
2026-01-09 00:57:56.238009 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-09 00:57:56.238045 | orchestrator | Friday 09 January 2026 00:46:06 +0000 (0:00:00.857) 0:00:00.857 ********
2026-01-09 00:57:56.238050 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.238057 | orchestrator |
2026-01-09 00:57:56.238079 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-09 00:57:56.238083 | orchestrator | Friday 09 January 2026 00:46:07 +0000 (0:00:01.183) 0:00:02.040 ********
2026-01-09 00:57:56.238087 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238122 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238127 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238131 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238135 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238139 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238142 | orchestrator |
2026-01-09 00:57:56.238146 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-09 00:57:56.238150 | orchestrator | Friday 09 January 2026 00:46:09 +0000 (0:00:01.601) 0:00:03.643 ********
2026-01-09 00:57:56.238154 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238158 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238161 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238165 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238205 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238210 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238214 | orchestrator |
2026-01-09 00:57:56.238217 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-09 00:57:56.238221 | orchestrator | Friday 09 January 2026 00:46:09 +0000 (0:00:00.644) 0:00:04.288 ********
2026-01-09 00:57:56.238225 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238229 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238232 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238236 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238240 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238243 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238247 | orchestrator |
2026-01-09 00:57:56.238251 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-09 00:57:56.238255 | orchestrator | Friday 09 January 2026 00:46:10 +0000 (0:00:00.974) 0:00:05.262 ********
2026-01-09 00:57:56.238258 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238262 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238266 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238270 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238273 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238277 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238281 | orchestrator |
2026-01-09 00:57:56.238285 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-09 00:57:56.238288 | orchestrator | Friday 09 January 2026 00:46:11 +0000 (0:00:00.618) 0:00:05.881 ********
2026-01-09 00:57:56.238292 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238296 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238300 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238303 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238307 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238311 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238314 | orchestrator |
2026-01-09 00:57:56.238318 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-09 00:57:56.238322 | orchestrator | Friday 09 January 2026 00:46:12 +0000 (0:00:00.543) 0:00:06.425 ********
2026-01-09 00:57:56.238326 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238330 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238336 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238342 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238369 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238376 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238381 | orchestrator |
2026-01-09 00:57:56.238387 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-09 00:57:56.238393 | orchestrator | Friday 09 January 2026 00:46:12 +0000 (0:00:00.821) 0:00:07.247 ********
2026-01-09 00:57:56.238398 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.238405 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.238425 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.238440 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.238446 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.238451 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.238457 | orchestrator |
2026-01-09 00:57:56.238518 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-09 00:57:56.238524 | orchestrator | Friday 09 January 2026 00:46:13 +0000 (0:00:00.802) 0:00:08.049 ********
2026-01-09 00:57:56.238528 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238533 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238537 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238541 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238546 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238551 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238555 | orchestrator |
2026-01-09 00:57:56.238560 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-09 00:57:56.238564 | orchestrator | Friday 09 January 2026 00:46:14 +0000 (0:00:00.880) 0:00:08.929 ********
2026-01-09 00:57:56.238568 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 00:57:56.238587 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 00:57:56.238591 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 00:57:56.238596 | orchestrator |
2026-01-09 00:57:56.238600 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-09 00:57:56.238605 | orchestrator | Friday 09 January 2026 00:46:15 +0000 (0:00:00.593) 0:00:09.522 ********
2026-01-09 00:57:56.238609 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.238613 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.238618 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.238650 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.238663 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.238675 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.238682 | orchestrator |
2026-01-09 00:57:56.238688 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-09 00:57:56.238694 | orchestrator | Friday 09 January 2026 00:46:16 +0000 (0:00:01.543) 0:00:11.066 ********
2026-01-09 00:57:56.238701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 00:57:56.238706 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 00:57:56.238712 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 00:57:56.238718 | orchestrator |
2026-01-09 00:57:56.238724 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-09 00:57:56.238730 | orchestrator | Friday 09 January 2026 00:46:19 +0000 (0:00:03.209) 0:00:14.275 ********
2026-01-09 00:57:56.238756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-09 00:57:56.238763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-09 00:57:56.238770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-09 00:57:56.238776 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.238782 | orchestrator |
2026-01-09 00:57:56.238788 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-09 00:57:56.238850 | orchestrator | Friday 09 January 2026 00:46:20 +0000 (0:00:00.690) 0:00:14.966 ********
2026-01-09 00:57:56.238860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238889 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.238895 | orchestrator |
2026-01-09 00:57:56.238902 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-09 00:57:56.238908 | orchestrator | Friday 09 January 2026 00:46:21 +0000 (0:00:01.093) 0:00:16.060 ********
2026-01-09 00:57:56.238917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.238941 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.238947 | orchestrator |
2026-01-09 00:57:56.238972 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-09 00:57:56.238979 | orchestrator | Friday 09 January 2026 00:46:22 +0000 (0:00:00.384) 0:00:16.444 ********
2026-01-09 00:57:56.239001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-09 00:46:17.347181', 'end': '2026-01-09 00:46:17.663605', 'delta': '0:00:00.316424', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.239031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-09 00:46:18.666079', 'end': '2026-01-09 00:46:18.957437', 'delta': '0:00:00.291358', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.239038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-09 00:46:19.453353', 'end': '2026-01-09 00:46:19.768856', 'delta': '0:00:00.315503', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.239159 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239170 | orchestrator |
2026-01-09 00:57:56.239201 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-09 00:57:56.239205 | orchestrator | Friday 09 January 2026 00:46:22 +0000 (0:00:00.347) 0:00:16.791 ********
2026-01-09 00:57:56.239209 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.239213 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.239217 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.239221 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.239225 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.239229 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.239232 | orchestrator |
2026-01-09 00:57:56.239236 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-09 00:57:56.239240 | orchestrator | Friday 09 January 2026 00:46:24 +0000 (0:00:02.111) 0:00:18.902 ********
2026-01-09 00:57:56.239244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.239248 | orchestrator |
2026-01-09 00:57:56.239252 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-09 00:57:56.239257 | orchestrator | Friday 09 January 2026 00:46:25 +0000 (0:00:01.074) 0:00:19.977 ********
2026-01-09 00:57:56.239261 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239264 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239268 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239272 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239276 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239279 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239283 | orchestrator |
2026-01-09 00:57:56.239287 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-09 00:57:56.239291 | orchestrator | Friday 09 January 2026 00:46:26 +0000 (0:00:01.328) 0:00:21.305 ********
2026-01-09 00:57:56.239295 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239299 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239319 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239324 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239327 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239331 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239335 | orchestrator |
2026-01-09 00:57:56.239339 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-09 00:57:56.239342 | orchestrator | Friday 09 January 2026 00:46:28 +0000 (0:00:01.434) 0:00:22.740 ********
2026-01-09 00:57:56.239346 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239350 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239354 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239358 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239361 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239365 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239369 | orchestrator |
2026-01-09 00:57:56.239373 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-09 00:57:56.239376 | orchestrator | Friday 09 January 2026 00:46:29 +0000 (0:00:00.139) 0:00:24.049 ********
2026-01-09 00:57:56.239380 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239384 | orchestrator |
2026-01-09 00:57:56.239388 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-09 00:57:56.239392 | orchestrator | Friday 09 January 2026 00:46:29 +0000 (0:00:00.515) 0:00:24.188 ********
2026-01-09 00:57:56.239396 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239399 | orchestrator |
2026-01-09 00:57:56.239409 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-09 00:57:56.239544 | orchestrator | Friday 09 January 2026 00:46:30 +0000 (0:00:00.515) 0:00:24.703 ********
2026-01-09 00:57:56.239560 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239564 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239568 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239592 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239596 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239605 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239610 | orchestrator |
2026-01-09 00:57:56.239614 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-09 00:57:56.239617 | orchestrator | Friday 09 January 2026 00:46:31 +0000 (0:00:01.426) 0:00:26.130 ********
2026-01-09 00:57:56.239621 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239625 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239628 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239632 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239636 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239640 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239726 | orchestrator |
2026-01-09 00:57:56.239730 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-09 00:57:56.239734 | orchestrator | Friday 09 January 2026 00:46:33 +0000 (0:00:01.625) 0:00:27.756 ********
2026-01-09 00:57:56.239738 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239742 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239746 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239750 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239754 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239757 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239761 | orchestrator |
2026-01-09 00:57:56.239765 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-09 00:57:56.239769 | orchestrator | Friday 09 January 2026 00:46:34 +0000 (0:00:01.313) 0:00:29.069 ********
2026-01-09 00:57:56.239773 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239776 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239780 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239784 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239788 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239791 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239795 | orchestrator |
2026-01-09 00:57:56.239799 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-09 00:57:56.239803 | orchestrator | Friday 09 January 2026 00:46:36 +0000 (0:00:01.653) 0:00:30.722 ********
2026-01-09 00:57:56.239807 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239810 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239814 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239818 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239821 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239825 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239829 | orchestrator |
2026-01-09 00:57:56.239833 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-09 00:57:56.239837 | orchestrator | Friday 09 January 2026 00:46:37 +0000 (0:00:00.637) 0:00:31.359 ********
2026-01-09 00:57:56.239840 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239844 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239848 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239852 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239856 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239859 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239863 | orchestrator |
2026-01-09 00:57:56.239867 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-09 00:57:56.239871 | orchestrator | Friday 09 January 2026 00:46:37 +0000 (0:00:00.885) 0:00:32.245 ********
2026-01-09 00:57:56.239881 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.239884 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.239903 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.239907 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.239911 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.239915 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.239919 | orchestrator |
2026-01-09 00:57:56.239923 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-09 00:57:56.239926 | orchestrator | Friday 09 January 2026 00:46:38 +0000 (0:00:00.973) 0:00:33.218 ********
2026-01-09 00:57:56.239932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links':
{'ids': ['dm-name-ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5', 'dm-uuid-LVM-BCvLFOrl5lTOIzhIOTqbYHvqKnkSItb99spLEMqIKlY2qQg7ER6TnRnPC3SiFtva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3', 'dm-uuid-LVM-9xt4N27rbyxiupZQkedE4Lk7OpX1MspAm7gSjGxDsIKxpFEw37qNzsGTpQpKqU7Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96', 'dm-uuid-LVM-d5EyZdToMQeCXu6Icc9w5GoEp2mvAXmxP4fK5ixFOffR83oVAWdKmYQD2rNVf4DY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.239998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.240002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca', 'dm-uuid-LVM-HpMF01jbYy44XkbQuKpsZ1d1GxpiAuMq4g8mOt1Py7W7M84xblXA7mVX4oxRUSOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.240006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.240010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.240021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode':
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b', 'dm-uuid-LVM-iOiF3VRJTfHYsD4EY7pYghHTnllcEYUdFPvIOoX9xhlaY3x7oSbqXnde4RTHI0TL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f', 
'dm-uuid-LVM-AXAdqHL7tb5KEAxwKMqI4uxprAFMOl2FTzl3GE82y70MD6pfqkTzrRzujnEer9HR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.240112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4i4uff-dwTm-Wsud-EMsq-258C-J0Jy-I6xjCC', 'scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e', 'scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.240128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.240136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AGGjM5-Oxx1-YEh6-MK2c-18t6-8dd6-hTAbHx', 'scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541', 'scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795', 'scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNjwNA-8Yv6-8IKm-DGuK-tYjh-Z96L-T7HQ2l', 'scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766', 'scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1fSfwk-bcE2-7Eks-1N7R-H6PK-oxLX-5C7l9u', 'scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8', 'scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d', 'scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mzD4GG-1RW7-GJRY-B41N-umfk-1UZC-FKsaF6', 'scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88', 'scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u55jh8-kpjC-Mvdc-qxrj-8QBS-XbFd-qUxXUi', 'scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338', 'scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595', 'scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 00:57:56.241135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 00:57:56.241152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-09 
00:57:56.241160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241164 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.241168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241198 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.241202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241242 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.241246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241250 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.241268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241300 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.241304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 00:57:56.241363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 00:57:56.241380 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.241384 | orchestrator |
2026-01-09 00:57:56.241388 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-09 00:57:56.241393 | orchestrator | Friday 09 January 2026 00:46:42 +0000 (0:00:03.300) 0:00:36.520 ********
2026-01-09 00:57:56.241397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96', 'dm-uuid-LVM-d5EyZdToMQeCXu6Icc9w5GoEp2mvAXmxP4fK5ixFOffR83oVAWdKmYQD2rNVf4DY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca', 'dm-uuid-LVM-HpMF01jbYy44XkbQuKpsZ1d1GxpiAuMq4g8mOt1Py7W7M84xblXA7mVX4oxRUSOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5', 'dm-uuid-LVM-BCvLFOrl5lTOIzhIOTqbYHvqKnkSItb99spLEMqIKlY2qQg7ER6TnRnPC3SiFtva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241468 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241496 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3', 'dm-uuid-LVM-9xt4N27rbyxiupZQkedE4Lk7OpX1MspAm7gSjGxDsIKxpFEw37qNzsGTpQpKqU7Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241562 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4i4uff-dwTm-Wsud-EMsq-258C-J0Jy-I6xjCC', 'scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e', 'scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b', 'dm-uuid-LVM-iOiF3VRJTfHYsD4EY7pYghHTnllcEYUdFPvIOoX9xhlaY3x7oSbqXnde4RTHI0TL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f', 'dm-uuid-LVM-AXAdqHL7tb5KEAxwKMqI4uxprAFMOl2FTzl3GE82y70MD6pfqkTzrRzujnEer9HR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241697 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241711 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AGGjM5-Oxx1-YEh6-MK2c-18t6-8dd6-hTAbHx', 'scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541', 'scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.241730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False',
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795', 'scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-09 00:57:56.241753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNjwNA-8Yv6-8IKm-DGuK-tYjh-Z96L-T7HQ2l', 'scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766', 'scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1fSfwk-bcE2-7Eks-1N7R-H6PK-oxLX-5C7l9u', 'scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8', 'scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d', 'scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241799 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241832 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241851 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.241855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242156 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9326bfb-8d23-4106-9716-592566db0c6a-part16'], 'labels': ['BOOT'], 'masters': [], 
'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242174 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mzD4GG-1RW7-GJRY-B41N-umfk-1UZC-FKsaF6', 'scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88', 'scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u55jh8-kpjC-Mvdc-qxrj-8QBS-XbFd-qUxXUi', 'scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338', 'scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242193 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.242197 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595', 'scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 00:57:56.242215 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242219 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242231 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242235 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242238 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242242 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242252 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242257 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242261 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242265 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d2d4152-79ed-4225-a335-fd9605d1d2cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242282 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242286 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242290 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242294 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242301 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242305 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242309 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242319 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242328 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5392d7a2-7961-43a9-927c-41bebd27776d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242335 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-09 00:57:56.242339 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242343 | orchestrator |
2026-01-09 00:57:56.242349 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-09 00:57:56.242354 | orchestrator | Friday 09 January 2026 00:46:43 +0000 (0:00:01.271) 0:00:37.792 ********
2026-01-09 00:57:56.242358 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.242362 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.242366 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.242369 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.242373 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.242377 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.242380 | orchestrator |
2026-01-09 00:57:56.242384 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-09 00:57:56.242388 | orchestrator | Friday 09 January 2026 00:46:45 +0000 (0:00:01.645) 0:00:39.438 ********
2026-01-09 00:57:56.242392 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.242395 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.242399 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.242403 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.242406 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.242410 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.242434 | orchestrator |
2026-01-09 00:57:56.242438 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-09 00:57:56.242446 | orchestrator | Friday 09 January 2026 00:46:46 +0000 (0:00:01.082) 0:00:40.520 ********
2026-01-09 00:57:56.242450 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242454 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242457 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242461 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242465 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242469 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242473 | orchestrator |
2026-01-09 00:57:56.242476 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-09 00:57:56.242480 | orchestrator | Friday 09 January 2026 00:46:47 +0000 (0:00:01.063) 0:00:41.584 ********
2026-01-09 00:57:56.242484 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242488 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242492 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242496 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242499 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242503 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242507 | orchestrator |
2026-01-09 00:57:56.242511 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-09 00:57:56.242514 | orchestrator | Friday 09 January 2026 00:46:48 +0000 (0:00:01.445) 0:00:43.029 ********
2026-01-09 00:57:56.242518 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242522 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242526 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242529 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242533 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242537 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242540 | orchestrator |
2026-01-09 00:57:56.242544 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-09 00:57:56.242548 | orchestrator | Friday 09 January 2026 00:46:49 +0000 (0:00:01.083) 0:00:44.113 ********
2026-01-09 00:57:56.242552 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242556 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242559 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242613 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242617 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242621 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242624 | orchestrator |
2026-01-09 00:57:56.242628 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-09 00:57:56.242632 | orchestrator | Friday 09 January 2026 00:46:50 +0000 (0:00:01.192) 0:00:45.305 ********
2026-01-09 00:57:56.242636 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-09 00:57:56.242640 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-09 00:57:56.242644 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-09 00:57:56.242648 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-09 00:57:56.242651 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-09 00:57:56.242655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-09 00:57:56.242712 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-09 00:57:56.242724 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-09 00:57:56.242728 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-09 00:57:56.242732 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-09 00:57:56.242735 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-09 00:57:56.242739 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-09 00:57:56.242743 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-09 00:57:56.242747 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-09 00:57:56.242751 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-09 00:57:56.242759 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-09 00:57:56.242764 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-09 00:57:56.242768 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-09 00:57:56.242772 | orchestrator |
2026-01-09 00:57:56.242777 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-09 00:57:56.242782 | orchestrator | Friday 09 January 2026 00:46:56 +0000 (0:00:05.179) 0:00:50.485 ********
2026-01-09 00:57:56.242786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-09 00:57:56.242791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-09 00:57:56.242796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-09 00:57:56.242800 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-09 00:57:56.242809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-09 00:57:56.242813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-09 00:57:56.242818 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-09 00:57:56.242830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-09 00:57:56.242838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-09 00:57:56.242842 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-09 00:57:56.242851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-09 00:57:56.242855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-09 00:57:56.242859 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-09 00:57:56.242868 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-09 00:57:56.242872 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-09 00:57:56.242877 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242881 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-09 00:57:56.242886 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-09 00:57:56.242890 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-09 00:57:56.242895 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242899 | orchestrator |
2026-01-09 00:57:56.242904 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-09 00:57:56.242908 | orchestrator | Friday 09 January 2026 00:46:56 +0000 (0:00:00.801) 0:00:51.286 ********
2026-01-09 00:57:56.242912 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.242916 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.242921 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.242926 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.242931 | orchestrator |
2026-01-09 00:57:56.242935 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-09 00:57:56.242940 | orchestrator | Friday 09 January 2026 00:46:58 +0000 (0:00:01.087) 0:00:52.374 ********
2026-01-09 00:57:56.242944 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242948 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242953 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242957 | orchestrator |
2026-01-09 00:57:56.242961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-09 00:57:56.242966 | orchestrator | Friday 09 January 2026 00:46:58 +0000 (0:00:00.597) 0:00:52.971 ********
2026-01-09 00:57:56.242970 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.242974 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.242982 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.242987 | orchestrator |
2026-01-09 00:57:56.242991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-09 00:57:56.242995 | orchestrator | Friday 09 January 2026 00:46:59 +0000 (0:00:00.516) 0:00:53.488 ********
2026-01-09 00:57:56.243000 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243004 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243009 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243013 | orchestrator |
2026-01-09 00:57:56.243017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-09 00:57:56.243021 | orchestrator | Friday 09 January 2026 00:46:59 +0000 (0:00:00.703) 0:00:54.192 ********
2026-01-09 00:57:56.243026 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243030 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243034 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243039 | orchestrator |
2026-01-09 00:57:56.243043 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-09 00:57:56.243048 | orchestrator | Friday 09 January 2026 00:47:00 +0000 (0:00:00.540) 0:00:54.732 ********
2026-01-09 00:57:56.243052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.243057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.243061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.243065 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243070 | orchestrator |
2026-01-09 00:57:56.243074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-09 00:57:56.243079 | orchestrator | Friday 09 January 2026 00:47:00 +0000 (0:00:00.410) 0:00:55.143 ********
2026-01-09 00:57:56.243083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.243087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.243092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.243096 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243100 | orchestrator |
2026-01-09 00:57:56.243105 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-09 00:57:56.243109 | orchestrator | Friday 09 January 2026 00:47:01 +0000 (0:00:00.514) 0:00:55.657 ********
2026-01-09 00:57:56.243114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.243118 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.243122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.243127 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243132 | orchestrator |
2026-01-09 00:57:56.243136 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-09 00:57:56.243141 | orchestrator | Friday 09 January 2026 00:47:01 +0000 (0:00:00.353) 0:00:56.011 ********
2026-01-09 00:57:56.243144 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243148 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243152 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243156 | orchestrator |
2026-01-09 00:57:56.243159 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-09 00:57:56.243163 | orchestrator | Friday 09 January 2026 00:47:02 +0000 (0:00:00.480) 0:00:56.491 ********
2026-01-09 00:57:56.243167 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-09 00:57:56.243171 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-09 00:57:56.243177 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-09 00:57:56.243188 | orchestrator |
2026-01-09 00:57:56.243200 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-09 00:57:56.243204 | orchestrator | Friday 09 January 2026 00:47:03 +0000 (0:00:01.412) 0:00:57.904 ********
2026-01-09 00:57:56.243208 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 00:57:56.243212 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 00:57:56.243220 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 00:57:56.243223 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.243227 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-09 00:57:56.243231 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-09 00:57:56.243235 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-09 00:57:56.243238 | orchestrator |
2026-01-09 00:57:56.243242 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-09 00:57:56.243246 | orchestrator | Friday 09 January 2026 00:47:04 +0000 (0:00:00.771) 0:00:58.676 ********
2026-01-09 00:57:56.243250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 00:57:56.243253 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 00:57:56.243257 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 00:57:56.243261 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.243265 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-09 00:57:56.243268 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-09 00:57:56.243272 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-09 00:57:56.243276 | orchestrator |
2026-01-09 00:57:56.243280 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-09 00:57:56.243284 | orchestrator | Friday 09 January 2026 00:47:06 +0000 (0:00:01.779) 0:01:00.455 ********
2026-01-09 00:57:56.243288 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.243294 | orchestrator |
2026-01-09 00:57:56.243298 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-09 00:57:56.243301 | orchestrator | Friday 09 January 2026 00:47:07 +0000 (0:00:01.038) 0:01:01.493 ********
2026-01-09 00:57:56.243305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.243309 | orchestrator |
2026-01-09 00:57:56.243313 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-09 00:57:56.243316 | orchestrator | Friday 09 January 2026 00:47:08 +0000 (0:00:01.088) 0:01:02.582 ********
2026-01-09 00:57:56.243320 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243324 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243328 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243331 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.243335 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.243339 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.243343 | orchestrator |
2026-01-09 00:57:56.243347 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-09 00:57:56.243350 | orchestrator | Friday 09 January 2026 00:47:09 +0000 (0:00:01.543) 0:01:04.126 ********
2026-01-09 00:57:56.243354 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243358 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243362 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243365 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243369 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243373 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243377 | orchestrator |
2026-01-09 00:57:56.243380 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-09 00:57:56.243384 | orchestrator | Friday 09 January 2026 00:47:10 +0000 (0:00:00.986) 0:01:05.113 ********
2026-01-09 00:57:56.243388 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243395 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243398 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243402 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243406 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243410 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243428 | orchestrator |
2026-01-09 00:57:56.243432 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-09 00:57:56.243436 | orchestrator | Friday 09 January 2026 00:47:11 +0000 (0:00:00.859) 0:01:05.972 ********
2026-01-09 00:57:56.243440 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243443 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243447 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243451 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243455 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243458 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243462 | orchestrator |
2026-01-09 00:57:56.243466 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-09 00:57:56.243470 | orchestrator | Friday 09 January 2026 00:47:12 +0000 (0:00:00.866) 0:01:06.839 ********
2026-01-09 00:57:56.243474 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243477 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243481 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243485 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.243488 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.243495 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.243499 | orchestrator |
2026-01-09 00:57:56.243505 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-09 00:57:56.243509 | orchestrator | Friday 09 January 2026 00:47:14 +0000 (0:00:01.698) 0:01:08.537 ********
2026-01-09 00:57:56.243512 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243516 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243520 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243524 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243527 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243531 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243535 | orchestrator |
2026-01-09 00:57:56.243539 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-09 00:57:56.243542 | orchestrator | Friday 09 January 2026 00:47:15 +0000 (0:00:01.113) 0:01:09.651 ********
2026-01-09 00:57:56.243546 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243550 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243554 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243557 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243561 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243565 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243568 | orchestrator |
2026-01-09 00:57:56.243572 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-09 00:57:56.243576 | orchestrator | Friday 09 January 2026 00:47:16 +0000 (0:00:01.128) 0:01:10.780 ********
2026-01-09 00:57:56.243580 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243583 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243587 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243591 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.243594 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.243598 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.243602 | orchestrator |
2026-01-09 00:57:56.243606 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-09 00:57:56.243609 | orchestrator | Friday 09 January 2026 00:47:17 +0000 (0:00:01.384) 0:01:12.165 ********
2026-01-09 00:57:56.243613 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.243617 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.243620 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.243624 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.243628 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.243636 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.243640 | orchestrator |
2026-01-09 00:57:56.243644 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-09 00:57:56.243647 | orchestrator | Friday 09 January 2026 00:47:19 +0000 (0:00:02.145) 0:01:14.310 ********
2026-01-09 00:57:56.243651 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.243655 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.243659 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.243662 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.243666 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.243670 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.243674 | orchestrator |
2026-01-09 00:57:56.243677 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-09 00:57:56.243681 | orchestrator | Friday
09 January 2026 00:47:21 +0000 (0:00:01.061) 0:01:15.372 ******** 2026-01-09 00:57:56.243685 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.243689 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.243692 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.243696 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.243700 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.243704 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.243707 | orchestrator | 2026-01-09 00:57:56.243711 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-09 00:57:56.243715 | orchestrator | Friday 09 January 2026 00:47:22 +0000 (0:00:01.014) 0:01:16.386 ******** 2026-01-09 00:57:56.243719 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.243722 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.243726 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.243730 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.243734 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.243737 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.243741 | orchestrator | 2026-01-09 00:57:56.243745 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-09 00:57:56.243749 | orchestrator | Friday 09 January 2026 00:47:22 +0000 (0:00:00.667) 0:01:17.054 ******** 2026-01-09 00:57:56.243753 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.243756 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.243760 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.243764 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.243767 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.243771 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.243775 | orchestrator | 2026-01-09 00:57:56.243779 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-01-09 00:57:56.243782 | orchestrator | Friday 09 January 2026 00:47:23 +0000 (0:00:00.716) 0:01:17.770 ******** 2026-01-09 00:57:56.243786 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.243790 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.243794 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.243797 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.243801 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.243805 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.243809 | orchestrator | 2026-01-09 00:57:56.243812 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-09 00:57:56.243816 | orchestrator | Friday 09 January 2026 00:47:24 +0000 (0:00:00.621) 0:01:18.391 ******** 2026-01-09 00:57:56.243820 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.243824 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.243830 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.243836 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.243841 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.243845 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.243849 | orchestrator | 2026-01-09 00:57:56.243853 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-09 00:57:56.243856 | orchestrator | Friday 09 January 2026 00:47:25 +0000 (0:00:01.261) 0:01:19.653 ******** 2026-01-09 00:57:56.243864 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.243868 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.243871 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.243875 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.243881 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.243888 | orchestrator | skipping: [testbed-node-2] 2026-01-09 
00:57:56.243892 | orchestrator | 2026-01-09 00:57:56.243896 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-09 00:57:56.243900 | orchestrator | Friday 09 January 2026 00:47:26 +0000 (0:00:00.885) 0:01:20.538 ******** 2026-01-09 00:57:56.243904 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.243907 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.243911 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.243915 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.243918 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.243922 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.243926 | orchestrator | 2026-01-09 00:57:56.243930 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-09 00:57:56.243934 | orchestrator | Friday 09 January 2026 00:47:27 +0000 (0:00:00.915) 0:01:21.453 ******** 2026-01-09 00:57:56.243937 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.243941 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.243945 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.243948 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.243952 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.243956 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.243960 | orchestrator | 2026-01-09 00:57:56.243964 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-09 00:57:56.243967 | orchestrator | Friday 09 January 2026 00:47:28 +0000 (0:00:00.888) 0:01:22.342 ******** 2026-01-09 00:57:56.243971 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.243975 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.243979 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.243982 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.243986 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.243990 | 
orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.243994 | orchestrator | 2026-01-09 00:57:56.243997 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-09 00:57:56.244001 | orchestrator | Friday 09 January 2026 00:47:29 +0000 (0:00:01.315) 0:01:23.658 ******** 2026-01-09 00:57:56.244005 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.244009 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.244012 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.244016 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.244020 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.244024 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.244027 | orchestrator | 2026-01-09 00:57:56.244031 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-09 00:57:56.244035 | orchestrator | Friday 09 January 2026 00:47:31 +0000 (0:00:02.259) 0:01:25.917 ******** 2026-01-09 00:57:56.244039 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.244043 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.244046 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.244050 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.244054 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.244058 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.244061 | orchestrator | 2026-01-09 00:57:56.244065 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-09 00:57:56.244069 | orchestrator | Friday 09 January 2026 00:47:34 +0000 (0:00:03.379) 0:01:29.296 ******** 2026-01-09 00:57:56.244073 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.244080 | orchestrator | 
2026-01-09 00:57:56.244084 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-09 00:57:56.244087 | orchestrator | Friday 09 January 2026 00:47:36 +0000 (0:00:01.246) 0:01:30.543 ******** 2026-01-09 00:57:56.244091 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244095 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244099 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244103 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244106 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244110 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244114 | orchestrator | 2026-01-09 00:57:56.244117 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-09 00:57:56.244121 | orchestrator | Friday 09 January 2026 00:47:36 +0000 (0:00:00.597) 0:01:31.140 ******** 2026-01-09 00:57:56.244125 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244129 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244132 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244136 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244140 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244144 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244147 | orchestrator | 2026-01-09 00:57:56.244151 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-09 00:57:56.244155 | orchestrator | Friday 09 January 2026 00:47:38 +0000 (0:00:01.337) 0:01:32.477 ******** 2026-01-09 00:57:56.244159 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244163 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244166 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244170 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244174 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244178 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-09 00:57:56.244181 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244185 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244189 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244193 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244203 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244207 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-09 00:57:56.244210 | orchestrator | 2026-01-09 00:57:56.244214 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-09 00:57:56.244218 | orchestrator | Friday 09 January 2026 00:47:39 +0000 (0:00:01.306) 0:01:33.784 ******** 2026-01-09 00:57:56.244222 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.244226 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.244230 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.244233 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.244237 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.244241 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.244245 | orchestrator | 2026-01-09 00:57:56.244249 | orchestrator | TASK [ceph-container-common : Restore certificates 
selinux context] ************ 2026-01-09 00:57:56.244252 | orchestrator | Friday 09 January 2026 00:47:40 +0000 (0:00:01.282) 0:01:35.066 ******** 2026-01-09 00:57:56.244256 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244260 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244264 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244268 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244275 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244279 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244283 | orchestrator | 2026-01-09 00:57:56.244286 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-09 00:57:56.244290 | orchestrator | Friday 09 January 2026 00:47:41 +0000 (0:00:00.609) 0:01:35.675 ******** 2026-01-09 00:57:56.244294 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244298 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244301 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244305 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244309 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244313 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244317 | orchestrator | 2026-01-09 00:57:56.244320 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-09 00:57:56.244324 | orchestrator | Friday 09 January 2026 00:47:42 +0000 (0:00:00.869) 0:01:36.545 ******** 2026-01-09 00:57:56.244328 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244332 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244335 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244339 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244343 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244347 | orchestrator | skipping: [testbed-node-2] 
2026-01-09 00:57:56.244350 | orchestrator | 2026-01-09 00:57:56.244354 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-09 00:57:56.244358 | orchestrator | Friday 09 January 2026 00:47:42 +0000 (0:00:00.535) 0:01:37.081 ******** 2026-01-09 00:57:56.244362 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.244366 | orchestrator | 2026-01-09 00:57:56.244370 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-09 00:57:56.244374 | orchestrator | Friday 09 January 2026 00:47:44 +0000 (0:00:01.261) 0:01:38.342 ******** 2026-01-09 00:57:56.244377 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.244381 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.244385 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.244389 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.244393 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.244396 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.244400 | orchestrator | 2026-01-09 00:57:56.244404 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-09 00:57:56.244408 | orchestrator | Friday 09 January 2026 00:48:39 +0000 (0:00:55.865) 0:02:34.208 ******** 2026-01-09 00:57:56.244428 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244433 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244437 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244441 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244444 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244448 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244452 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244456 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244460 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244463 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244467 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244471 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244475 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244484 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244487 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244491 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244495 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244499 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244503 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244506 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244513 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-09 00:57:56.244519 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-09 00:57:56.244523 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-09 00:57:56.244527 | orchestrator | skipping: 
[testbed-node-2] 2026-01-09 00:57:56.244531 | orchestrator | 2026-01-09 00:57:56.244535 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-09 00:57:56.244538 | orchestrator | Friday 09 January 2026 00:48:40 +0000 (0:00:00.937) 0:02:35.145 ******** 2026-01-09 00:57:56.244542 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244546 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244550 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244553 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244557 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244561 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244565 | orchestrator | 2026-01-09 00:57:56.244568 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-09 00:57:56.244572 | orchestrator | Friday 09 January 2026 00:48:41 +0000 (0:00:00.914) 0:02:36.060 ******** 2026-01-09 00:57:56.244576 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244580 | orchestrator | 2026-01-09 00:57:56.244583 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-09 00:57:56.244587 | orchestrator | Friday 09 January 2026 00:48:41 +0000 (0:00:00.170) 0:02:36.230 ******** 2026-01-09 00:57:56.244591 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244595 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244598 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244602 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244606 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244612 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244618 | orchestrator | 2026-01-09 00:57:56.244623 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-09 00:57:56.244630 | 
orchestrator | Friday 09 January 2026 00:48:42 +0000 (0:00:00.854) 0:02:37.085 ******** 2026-01-09 00:57:56.244636 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244642 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244648 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244654 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244659 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244665 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244671 | orchestrator | 2026-01-09 00:57:56.244677 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-09 00:57:56.244683 | orchestrator | Friday 09 January 2026 00:48:43 +0000 (0:00:01.063) 0:02:38.148 ******** 2026-01-09 00:57:56.244689 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244694 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244700 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244706 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244711 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244716 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244722 | orchestrator | 2026-01-09 00:57:56.244728 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-09 00:57:56.244739 | orchestrator | Friday 09 January 2026 00:48:44 +0000 (0:00:01.052) 0:02:39.201 ******** 2026-01-09 00:57:56.244744 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.244750 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.244756 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.244762 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.244767 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.244773 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.244779 | orchestrator | 2026-01-09 00:57:56.244785 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-09 00:57:56.244791 | orchestrator | Friday 09 January 2026 00:48:47 +0000 (0:00:02.491) 0:02:41.693 ******** 2026-01-09 00:57:56.244797 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.244802 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.244808 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.244814 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.244821 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.244826 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.244832 | orchestrator | 2026-01-09 00:57:56.244838 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-09 00:57:56.244843 | orchestrator | Friday 09 January 2026 00:48:48 +0000 (0:00:00.864) 0:02:42.558 ******** 2026-01-09 00:57:56.244849 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.244856 | orchestrator | 2026-01-09 00:57:56.244862 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-09 00:57:56.244867 | orchestrator | Friday 09 January 2026 00:48:49 +0000 (0:00:01.588) 0:02:44.146 ******** 2026-01-09 00:57:56.244873 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244879 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244885 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244891 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244897 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244903 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244907 | orchestrator | 2026-01-09 00:57:56.244911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-09 00:57:56.244914 | 
orchestrator | Friday 09 January 2026 00:48:51 +0000 (0:00:01.299) 0:02:45.445 ******** 2026-01-09 00:57:56.244918 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244922 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244926 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244932 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244938 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244943 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244949 | orchestrator | 2026-01-09 00:57:56.244954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-09 00:57:56.244958 | orchestrator | Friday 09 January 2026 00:48:51 +0000 (0:00:00.828) 0:02:46.274 ******** 2026-01-09 00:57:56.244962 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.244965 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.244973 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.244976 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.244984 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.244988 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.244992 | orchestrator | 2026-01-09 00:57:56.244995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-09 00:57:56.244999 | orchestrator | Friday 09 January 2026 00:48:53 +0000 (0:00:01.392) 0:02:47.666 ******** 2026-01-09 00:57:56.245003 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.245007 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.245010 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.245014 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.245025 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.245029 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.245033 | orchestrator | 2026-01-09 
2026-01-09 00:57:56.245037 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-09 00:57:56.245040 | orchestrator | Friday 09 January 2026 00:48:54 +0000 (0:00:00.971) 0:02:48.638 ********
2026-01-09 00:57:56.245044 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.245048 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.245051 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.245055 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245059 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245063 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245066 | orchestrator |
2026-01-09 00:57:56.245070 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-09 00:57:56.245074 | orchestrator | Friday 09 January 2026 00:48:55 +0000 (0:00:00.788) 0:02:49.426 ********
2026-01-09 00:57:56.245077 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.245081 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.245085 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.245089 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245092 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245096 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245100 | orchestrator |
2026-01-09 00:57:56.245104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-09 00:57:56.245107 | orchestrator | Friday 09 January 2026 00:48:55 +0000 (0:00:00.700) 0:02:50.127 ********
2026-01-09 00:57:56.245111 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.245115 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.245118 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245122 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.245126 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245130 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245133 | orchestrator |
2026-01-09 00:57:56.245137 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-09 00:57:56.245141 | orchestrator | Friday 09 January 2026 00:48:56 +0000 (0:00:00.775) 0:02:50.903 ********
2026-01-09 00:57:56.245145 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.245148 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.245152 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.245156 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245159 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245163 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245167 | orchestrator |
2026-01-09 00:57:56.245171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-09 00:57:56.245174 | orchestrator | Friday 09 January 2026 00:48:57 +0000 (0:00:00.595) 0:02:51.498 ********
2026-01-09 00:57:56.245178 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.245182 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.245186 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.245189 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.245193 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.245197 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.245200 | orchestrator |
2026-01-09 00:57:56.245204 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-09 00:57:56.245208 | orchestrator | Friday 09 January 2026 00:48:58 +0000 (0:00:01.287) 0:02:52.786 ********
2026-01-09 00:57:56.245212 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.245216 | orchestrator |
2026-01-09 00:57:56.245220 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-09 00:57:56.245223 | orchestrator | Friday 09 January 2026 00:49:00 +0000 (0:00:01.826) 0:02:54.612 ********
2026-01-09 00:57:56.245227 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-09 00:57:56.245234 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-09 00:57:56.245238 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245242 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-09 00:57:56.245246 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245253 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245261 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-09 00:57:56.245265 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245268 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245272 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245276 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-09 00:57:56.245280 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-09 00:57:56.245283 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245291 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245301 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-09 00:57:56.245311 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245315 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245322 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-09 00:57:56.245330 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245334 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245337 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-09 00:57:56.245348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245352 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245360 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245363 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245371 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-09 00:57:56.245375 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245378 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245382 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245390 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245393 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-09 00:57:56.245397 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245401 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245431 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245436 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245440 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-09 00:57:56.245444 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245451 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245455 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245459 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-09 00:57:56.245466 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245470 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245474 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245477 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245481 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245485 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245492 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245496 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-09 00:57:56.245500 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245504 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245508 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245511 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245515 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-09 00:57:56.245522 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245526 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245530 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-09 00:57:56.245534 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245541 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-09 00:57:56.245545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-09 00:57:56.245556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245560 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-09 00:57:56.245563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245571 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-09 00:57:56.245575 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-09 00:57:56.245578 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245582 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245593 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-09 00:57:56.245597 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-09 00:57:56.245601 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-09 00:57:56.245605 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-09 00:57:56.245608 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-09 00:57:56.245612 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-09 00:57:56.245616 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-09 00:57:56.245620 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-09 00:57:56.245623 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-09 00:57:56.245627 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-09 00:57:56.245631 | orchestrator |
2026-01-09 00:57:56.245635 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-09 00:57:56.245638 | orchestrator | Friday 09 January 2026 00:49:07 +0000 (0:00:07.528) 0:03:02.140 ********
2026-01-09 00:57:56.245642 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245646 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245650 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245654 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.245658 | orchestrator |
2026-01-09 00:57:56.245662 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-09 00:57:56.245665 | orchestrator | Friday 09 January 2026 00:49:08 +0000 (0:00:00.761) 0:03:02.902 ********
2026-01-09 00:57:56.245669 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245674 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245677 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245681 | orchestrator |
2026-01-09 00:57:56.245685 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-09 00:57:56.245689 | orchestrator | Friday 09 January 2026 00:49:09 +0000 (0:00:01.014) 0:03:03.917 ********
2026-01-09 00:57:56.245693 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245696 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245700 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.245704 | orchestrator |
2026-01-09 00:57:56.245708 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-09 00:57:56.245712 | orchestrator | Friday 09 January 2026 00:49:10 +0000 (0:00:01.296) 0:03:05.213 ********
2026-01-09 00:57:56.245718 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.245724 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.245731 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.245741 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245750 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245755 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245760 | orchestrator |
2026-01-09 00:57:56.245766 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-09 00:57:56.245772 | orchestrator | Friday 09 January 2026 00:49:11 +0000 (0:00:00.658) 0:03:05.872 ********
2026-01-09 00:57:56.245783 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.245789 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.245794 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.245800 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245806 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245813 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245818 | orchestrator |
2026-01-09 00:57:56.245824 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-09 00:57:56.245829 | orchestrator | Friday 09 January 2026 00:49:12 +0000 (0:00:01.177) 0:03:07.001 ********
2026-01-09 00:57:56.245835 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.245841 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.245846 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.245852 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.245858 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.245864 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.245871 | orchestrator |
2026-01-09 00:57:56.246005 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-09 00:57:56.246038 | orchestrator | Friday 09 January 2026 00:49:13 +0000 (0:00:01.177) 0:03:08.179 ********
2026-01-09 00:57:56.246043 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246047 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246051 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246055 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246059 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246063 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246067 | orchestrator |
2026-01-09 00:57:56.246071 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-09 00:57:56.246075 | orchestrator | Friday 09 January 2026 00:49:15 +0000 (0:00:01.183) 0:03:09.362 ********
2026-01-09 00:57:56.246079 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246082 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246086 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246090 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246095 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246100 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246106 | orchestrator |
2026-01-09 00:57:56.246112 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-09 00:57:56.246119 | orchestrator | Friday 09 January 2026 00:49:15 +0000 (0:00:00.831) 0:03:10.194 ********
2026-01-09 00:57:56.246125 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246131 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246136 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246142 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246147 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246153 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246159 | orchestrator |
2026-01-09 00:57:56.246164 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-09 00:57:56.246170 | orchestrator | Friday 09 January 2026 00:49:16 +0000 (0:00:00.955) 0:03:11.150 ********
2026-01-09 00:57:56.246175 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246181 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246186 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246192 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246197 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246203 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246209 | orchestrator |
2026-01-09 00:57:56.246214 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-09 00:57:56.246220 | orchestrator | Friday 09 January 2026 00:49:17 +0000 (0:00:00.812) 0:03:11.962 ********
2026-01-09 00:57:56.246225 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246237 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246243 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246248 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246254 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246259 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246265 | orchestrator |
2026-01-09 00:57:56.246270 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-09 00:57:56.246276 | orchestrator | Friday 09 January 2026 00:49:18 +0000 (0:00:01.187) 0:03:13.150 ********
2026-01-09 00:57:56.246282 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246288 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246293 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246299 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.246304 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.246310 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.246315 | orchestrator |
2026-01-09 00:57:56.246321 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-09 00:57:56.246327 | orchestrator | Friday 09 January 2026 00:49:21 +0000 (0:00:02.899) 0:03:16.049 ********
2026-01-09 00:57:56.246332 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.246338 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.246344 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.246350 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246355 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246361 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246367 | orchestrator |
2026-01-09 00:57:56.246372 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-09 00:57:56.246378 | orchestrator | Friday 09 January 2026 00:49:22 +0000 (0:00:01.109) 0:03:17.159 ********
2026-01-09 00:57:56.246384 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.246390 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.246396 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.246401 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246407 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246458 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246465 | orchestrator |
2026-01-09 00:57:56.246471 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-09 00:57:56.246477 | orchestrator | Friday 09 January 2026 00:49:23 +0000 (0:00:00.759) 0:03:17.919 ********
2026-01-09 00:57:56.246482 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246514 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246520 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246526 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246531 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246536 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246542 | orchestrator |
2026-01-09 00:57:56.246548 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-09 00:57:56.246554 | orchestrator | Friday 09 January 2026 00:49:24 +0000 (0:00:01.004) 0:03:18.924 ********
2026-01-09 00:57:56.246559 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.246566 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.246572 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-09 00:57:56.246577 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246622 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246637 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246643 | orchestrator |
2026-01-09 00:57:56.246649 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-09 00:57:56.246655 | orchestrator | Friday 09 January 2026 00:49:25 +0000 (0:00:00.808) 0:03:19.732 ********
2026-01-09 00:57:56.246672 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-09 00:57:56.246682 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-09 00:57:56.246689 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246695 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-09 00:57:56.246702 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-09 00:57:56.246709 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-09 00:57:56.246715 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-09 00:57:56.246721 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246727 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246734 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246740 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246746 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246753 | orchestrator |
2026-01-09 00:57:56.246759 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-09 00:57:56.246766 | orchestrator | Friday 09 January 2026 00:49:26 +0000 (0:00:01.205) 0:03:20.938 ********
2026-01-09 00:57:56.246772 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246778 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246784 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246788 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246792 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246795 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246799 | orchestrator |
2026-01-09 00:57:56.246803 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-09 00:57:56.246807 | orchestrator | Friday 09 January 2026 00:49:27 +0000 (0:00:00.751) 0:03:21.690 ********
2026-01-09 00:57:56.246811 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246815 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246819 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246822 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246826 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246830 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246834 | orchestrator |
2026-01-09 00:57:56.246838 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-09 00:57:56.246842 | orchestrator | Friday 09 January 2026 00:49:28 +0000 (0:00:01.083) 0:03:22.773 ********
2026-01-09 00:57:56.246846 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246858 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246863 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246869 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246875 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246880 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246886 | orchestrator |
2026-01-09 00:57:56.246892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-09 00:57:56.246897 | orchestrator | Friday 09 January 2026 00:49:29 +0000 (0:00:00.725) 0:03:23.499 ********
2026-01-09 00:57:56.246903 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246908 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.246914 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.246919 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.246924 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.246930 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.246936 | orchestrator |
2026-01-09 00:57:56.246942 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-09 00:57:56.246977 | orchestrator | Friday 09 January 2026 00:49:30 +0000 (0:00:01.193) 0:03:24.692 ********
2026-01-09 00:57:56.246989 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.246996 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.247002 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.247006 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247010 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.247014 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.247018 | orchestrator |
2026-01-09 00:57:56.247021 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-09 00:57:56.247025 | orchestrator | Friday 09 January 2026 00:49:31 +0000 (0:00:00.967) 0:03:25.659 ********
2026-01-09 00:57:56.247029 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.247033 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.247037 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247040 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.247044 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.247048 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.247052 | orchestrator |
2026-01-09 00:57:56.247056 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-09 00:57:56.247059 | orchestrator | Friday 09 January 2026 00:49:32 +0000 (0:00:00.954) 0:03:26.613 ********
2026-01-09 00:57:56.247063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.247067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.247071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.247075 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.247079 | orchestrator |
2026-01-09 00:57:56.247082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-09 00:57:56.247086 | orchestrator | Friday 09 January 2026 00:49:32 +0000 (0:00:00.356) 0:03:26.970 ********
2026-01-09 00:57:56.247090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.247094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.247098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.247101 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.247105 | orchestrator |
2026-01-09 00:57:56.247109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-09 00:57:56.247113 | orchestrator | Friday 09 January 2026 00:49:33 +0000 (0:00:00.387) 0:03:27.357 ********
2026-01-09 00:57:56.247116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.247120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.247124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.247128 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.247138 | orchestrator |
2026-01-09 00:57:56.247142 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-09 00:57:56.247145 | orchestrator | Friday 09 January 2026 00:49:33 +0000 (0:00:00.403) 0:03:27.761 ********
2026-01-09 00:57:56.247149 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.247153 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.247157 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.247161 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247165 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.247169 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.247172 | orchestrator |
2026-01-09 00:57:56.247176 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-09 00:57:56.247180 | orchestrator | Friday 09 January 2026 00:49:34 +0000 (0:00:00.756) 0:03:28.517 ********
2026-01-09 00:57:56.247184 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-09 00:57:56.247188 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-09 00:57:56.247192 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-09 00:57:56.247196 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-09 00:57:56.247200 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247203 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-09 00:57:56.247207 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.247211 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-09 00:57:56.247215 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.247219 | orchestrator |
2026-01-09 00:57:56.247222 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-09 00:57:56.247226 | orchestrator | Friday 09 January 2026 00:49:36 +0000 (0:00:02.154) 0:03:30.672 ********
2026-01-09 00:57:56.247230 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.247234 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.247238 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.247242 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.247246 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.247249 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.247253 | orchestrator |
2026-01-09 00:57:56.247257 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-09 00:57:56.247261 | orchestrator | Friday 09 January 2026 00:49:39 +0000 (0:00:02.791) 0:03:33.463 ********
2026-01-09 00:57:56.247264 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.247268 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.247272 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.247276 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.247280 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.247283 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.247287 | orchestrator |
2026-01-09 00:57:56.247291 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-09 00:57:56.247295 | orchestrator | Friday 09 January 2026 00:49:40 +0000 (0:00:01.098) 0:03:34.562 ********
2026-01-09 00:57:56.247298 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.247302 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.247306 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.247310 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.247314 | orchestrator |
2026-01-09 00:57:56.247318 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-09 00:57:56.247334 | orchestrator | Friday 09 January 2026 00:49:41 +0000 (0:00:01.190) 0:03:35.752 ********
2026-01-09 00:57:56.247338 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.247346 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.247350 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.247354 | orchestrator |
2026-01-09 00:57:56.247357 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-09 00:57:56.247361 | orchestrator | Friday 09 January 2026 00:49:41 +0000 (0:00:00.367) 0:03:36.119 ********
2026-01-09 00:57:56.247368 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.247372 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.247376 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.247379 | orchestrator |
2026-01-09 00:57:56.247383 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-09 00:57:56.247387 | orchestrator | Friday 09 January 2026 00:49:43 +0000 (0:00:01.300) 0:03:37.420 ********
2026-01-09 00:57:56.247391 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-09 00:57:56.247395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-09 00:57:56.247398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-09 00:57:56.247402 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247406 | orchestrator |
2026-01-09 00:57:56.247410 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-09 00:57:56.247435 | orchestrator | Friday 09 January 2026 00:49:44 +0000 (0:00:01.155) 0:03:38.576 ********
2026-01-09 00:57:56.247439 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.247443 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.247447 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.247451 | orchestrator |
2026-01-09 00:57:56.247455 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-09 00:57:56.247458 | orchestrator | Friday 09 January 2026 00:49:44 +0000 (0:00:00.377) 0:03:38.954 ********
2026-01-09 00:57:56.247462 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.247466 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.247470 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.247474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.247478 | orchestrator |
2026-01-09 00:57:56.247482 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-09 00:57:56.247485 | orchestrator | Friday 09 January 2026 00:49:45 +0000 (0:00:00.926) 0:03:39.881 ********
2026-01-09 00:57:56.247489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.247493 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.247497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.247501 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247504 | orchestrator | 2026-01-09 00:57:56.247508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-09 00:57:56.247512 | orchestrator | Friday 09 January 2026 00:49:45 +0000 (0:00:00.348) 0:03:40.229 ******** 2026-01-09 00:57:56.247516 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247520 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.247523 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.247527 | orchestrator | 2026-01-09 00:57:56.247531 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-09 00:57:56.247535 | orchestrator | Friday 09 January 2026 00:49:46 +0000 (0:00:00.418) 0:03:40.647 ******** 2026-01-09 00:57:56.247539 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247542 | orchestrator | 2026-01-09 00:57:56.247546 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-09 00:57:56.247550 | orchestrator | Friday 09 January 2026 00:49:46 +0000 (0:00:00.275) 0:03:40.923 ******** 2026-01-09 00:57:56.247554 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247558 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.247561 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.247565 | orchestrator | 2026-01-09 00:57:56.247569 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-09 00:57:56.247573 | orchestrator | Friday 09 January 2026 00:49:46 +0000 (0:00:00.279) 0:03:41.202 ******** 2026-01-09 00:57:56.247576 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247580 | orchestrator | 2026-01-09 00:57:56.247584 | 
orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-09 00:57:56.247591 | orchestrator | Friday 09 January 2026 00:49:47 +0000 (0:00:00.187) 0:03:41.390 ******** 2026-01-09 00:57:56.247595 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247599 | orchestrator | 2026-01-09 00:57:56.247603 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-09 00:57:56.247607 | orchestrator | Friday 09 January 2026 00:49:47 +0000 (0:00:00.223) 0:03:41.613 ******** 2026-01-09 00:57:56.247610 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247614 | orchestrator | 2026-01-09 00:57:56.247618 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-09 00:57:56.247622 | orchestrator | Friday 09 January 2026 00:49:47 +0000 (0:00:00.109) 0:03:41.723 ******** 2026-01-09 00:57:56.247626 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247630 | orchestrator | 2026-01-09 00:57:56.247634 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-09 00:57:56.247638 | orchestrator | Friday 09 January 2026 00:49:47 +0000 (0:00:00.580) 0:03:42.303 ******** 2026-01-09 00:57:56.247641 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247645 | orchestrator | 2026-01-09 00:57:56.247649 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-09 00:57:56.247653 | orchestrator | Friday 09 January 2026 00:49:48 +0000 (0:00:00.214) 0:03:42.517 ******** 2026-01-09 00:57:56.247657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 00:57:56.247661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.247664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.247669 | orchestrator | skipping: 
[testbed-node-3] 2026-01-09 00:57:56.247672 | orchestrator | 2026-01-09 00:57:56.247676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-09 00:57:56.247693 | orchestrator | Friday 09 January 2026 00:49:48 +0000 (0:00:00.429) 0:03:42.946 ******** 2026-01-09 00:57:56.247700 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247704 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.247708 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.247712 | orchestrator | 2026-01-09 00:57:56.247716 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-09 00:57:56.247720 | orchestrator | Friday 09 January 2026 00:49:48 +0000 (0:00:00.289) 0:03:43.236 ******** 2026-01-09 00:57:56.247723 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247727 | orchestrator | 2026-01-09 00:57:56.247731 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-09 00:57:56.247735 | orchestrator | Friday 09 January 2026 00:49:49 +0000 (0:00:00.187) 0:03:43.424 ******** 2026-01-09 00:57:56.247739 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247743 | orchestrator | 2026-01-09 00:57:56.247747 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-09 00:57:56.247751 | orchestrator | Friday 09 January 2026 00:49:49 +0000 (0:00:00.200) 0:03:43.624 ******** 2026-01-09 00:57:56.247755 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.247761 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.247767 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.247773 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.247778 | orchestrator | 2026-01-09 00:57:56.247783 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-09 00:57:56.247789 | orchestrator | Friday 09 January 2026 00:49:50 +0000 (0:00:01.069) 0:03:44.693 ******** 2026-01-09 00:57:56.247800 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.247806 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.247812 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.247818 | orchestrator | 2026-01-09 00:57:56.247824 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-09 00:57:56.247830 | orchestrator | Friday 09 January 2026 00:49:50 +0000 (0:00:00.381) 0:03:45.075 ******** 2026-01-09 00:57:56.247841 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.247847 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.247853 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.247861 | orchestrator | 2026-01-09 00:57:56.247866 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-09 00:57:56.247873 | orchestrator | Friday 09 January 2026 00:49:52 +0000 (0:00:01.290) 0:03:46.365 ******** 2026-01-09 00:57:56.247879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 00:57:56.247885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.247891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.247898 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.247904 | orchestrator | 2026-01-09 00:57:56.247911 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-09 00:57:56.247917 | orchestrator | Friday 09 January 2026 00:49:53 +0000 (0:00:01.081) 0:03:47.447 ******** 2026-01-09 00:57:56.247924 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.247931 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.247936 | orchestrator | ok: 
[testbed-node-5] 2026-01-09 00:57:56.247940 | orchestrator | 2026-01-09 00:57:56.247944 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-09 00:57:56.247948 | orchestrator | Friday 09 January 2026 00:49:53 +0000 (0:00:00.599) 0:03:48.047 ******** 2026-01-09 00:57:56.247952 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.247955 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.247959 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.247963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.247967 | orchestrator | 2026-01-09 00:57:56.247971 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-09 00:57:56.247975 | orchestrator | Friday 09 January 2026 00:49:54 +0000 (0:00:00.921) 0:03:48.968 ******** 2026-01-09 00:57:56.247978 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.247982 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.247986 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.247991 | orchestrator | 2026-01-09 00:57:56.247995 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-09 00:57:56.247999 | orchestrator | Friday 09 January 2026 00:49:55 +0000 (0:00:00.608) 0:03:49.577 ******** 2026-01-09 00:57:56.248002 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.248006 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.248010 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.248014 | orchestrator | 2026-01-09 00:57:56.248018 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-09 00:57:56.248021 | orchestrator | Friday 09 January 2026 00:49:56 +0000 (0:00:01.202) 0:03:50.780 ******** 2026-01-09 00:57:56.248025 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-01-09 00:57:56.248029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.248033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.248037 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.248041 | orchestrator | 2026-01-09 00:57:56.248044 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-09 00:57:56.248048 | orchestrator | Friday 09 January 2026 00:49:57 +0000 (0:00:00.641) 0:03:51.421 ******** 2026-01-09 00:57:56.248052 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.248056 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.248060 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.248063 | orchestrator | 2026-01-09 00:57:56.248067 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-09 00:57:56.248071 | orchestrator | Friday 09 January 2026 00:49:57 +0000 (0:00:00.326) 0:03:51.747 ******** 2026-01-09 00:57:56.248075 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.248083 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.248087 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.248091 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248094 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248116 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248121 | orchestrator | 2026-01-09 00:57:56.248128 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-09 00:57:56.248132 | orchestrator | Friday 09 January 2026 00:49:58 +0000 (0:00:01.019) 0:03:52.767 ******** 2026-01-09 00:57:56.248136 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.248140 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.248143 | orchestrator | skipping: [testbed-node-5] 
2026-01-09 00:57:56.248147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.248151 | orchestrator | 2026-01-09 00:57:56.248155 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-09 00:57:56.248159 | orchestrator | Friday 09 January 2026 00:49:59 +0000 (0:00:00.883) 0:03:53.651 ******** 2026-01-09 00:57:56.248163 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248167 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248170 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248174 | orchestrator | 2026-01-09 00:57:56.248178 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-09 00:57:56.248182 | orchestrator | Friday 09 January 2026 00:49:59 +0000 (0:00:00.636) 0:03:54.288 ******** 2026-01-09 00:57:56.248185 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.248189 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.248193 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.248196 | orchestrator | 2026-01-09 00:57:56.248200 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-09 00:57:56.248204 | orchestrator | Friday 09 January 2026 00:50:01 +0000 (0:00:01.683) 0:03:55.971 ******** 2026-01-09 00:57:56.248208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-09 00:57:56.248212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-09 00:57:56.248216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-09 00:57:56.248219 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248223 | orchestrator | 2026-01-09 00:57:56.248227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-09 00:57:56.248231 | orchestrator | 
Friday 09 January 2026 00:50:02 +0000 (0:00:00.647) 0:03:56.619 ******** 2026-01-09 00:57:56.248234 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248238 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248242 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248246 | orchestrator | 2026-01-09 00:57:56.248249 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-09 00:57:56.248253 | orchestrator | 2026-01-09 00:57:56.248257 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-09 00:57:56.248261 | orchestrator | Friday 09 January 2026 00:50:02 +0000 (0:00:00.572) 0:03:57.191 ******** 2026-01-09 00:57:56.248265 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.248268 | orchestrator | 2026-01-09 00:57:56.248272 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-09 00:57:56.248276 | orchestrator | Friday 09 January 2026 00:50:03 +0000 (0:00:00.642) 0:03:57.834 ******** 2026-01-09 00:57:56.248280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.248283 | orchestrator | 2026-01-09 00:57:56.248287 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-09 00:57:56.248291 | orchestrator | Friday 09 January 2026 00:50:03 +0000 (0:00:00.470) 0:03:58.304 ******** 2026-01-09 00:57:56.248295 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248302 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248306 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248310 | orchestrator | 2026-01-09 00:57:56.248313 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 
2026-01-09 00:57:56.248317 | orchestrator | Friday 09 January 2026 00:50:04 +0000 (0:00:00.869) 0:03:59.174 ******** 2026-01-09 00:57:56.248321 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248325 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248328 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248332 | orchestrator | 2026-01-09 00:57:56.248336 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-09 00:57:56.248340 | orchestrator | Friday 09 January 2026 00:50:05 +0000 (0:00:00.316) 0:03:59.490 ******** 2026-01-09 00:57:56.248343 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248347 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248351 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248355 | orchestrator | 2026-01-09 00:57:56.248358 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-09 00:57:56.248362 | orchestrator | Friday 09 January 2026 00:50:05 +0000 (0:00:00.287) 0:03:59.777 ******** 2026-01-09 00:57:56.248366 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248370 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248373 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248377 | orchestrator | 2026-01-09 00:57:56.248381 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-09 00:57:56.248385 | orchestrator | Friday 09 January 2026 00:50:05 +0000 (0:00:00.283) 0:04:00.060 ******** 2026-01-09 00:57:56.248388 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248392 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248396 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248400 | orchestrator | 2026-01-09 00:57:56.248403 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-09 
00:57:56.248407 | orchestrator | Friday 09 January 2026 00:50:06 +0000 (0:00:00.914) 0:04:00.975 ******** 2026-01-09 00:57:56.248411 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248455 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248458 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248462 | orchestrator | 2026-01-09 00:57:56.248466 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-09 00:57:56.248470 | orchestrator | Friday 09 January 2026 00:50:06 +0000 (0:00:00.268) 0:04:01.243 ******** 2026-01-09 00:57:56.248487 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248492 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248499 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248503 | orchestrator | 2026-01-09 00:57:56.248506 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-09 00:57:56.248510 | orchestrator | Friday 09 January 2026 00:50:07 +0000 (0:00:00.325) 0:04:01.569 ******** 2026-01-09 00:57:56.248514 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248518 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248522 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248525 | orchestrator | 2026-01-09 00:57:56.248529 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-09 00:57:56.248533 | orchestrator | Friday 09 January 2026 00:50:08 +0000 (0:00:00.767) 0:04:02.336 ******** 2026-01-09 00:57:56.248537 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248540 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248544 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248548 | orchestrator | 2026-01-09 00:57:56.248552 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-09 00:57:56.248556 | orchestrator | Friday 
09 January 2026 00:50:08 +0000 (0:00:00.965) 0:04:03.301 ******** 2026-01-09 00:57:56.248560 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248563 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248571 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248575 | orchestrator | 2026-01-09 00:57:56.248579 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-09 00:57:56.248582 | orchestrator | Friday 09 January 2026 00:50:09 +0000 (0:00:00.486) 0:04:03.787 ******** 2026-01-09 00:57:56.248586 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248590 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248594 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248598 | orchestrator | 2026-01-09 00:57:56.248602 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-09 00:57:56.248605 | orchestrator | Friday 09 January 2026 00:50:09 +0000 (0:00:00.416) 0:04:04.204 ******** 2026-01-09 00:57:56.248609 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248613 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248617 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248620 | orchestrator | 2026-01-09 00:57:56.248624 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-09 00:57:56.248628 | orchestrator | Friday 09 January 2026 00:50:10 +0000 (0:00:00.360) 0:04:04.564 ******** 2026-01-09 00:57:56.248632 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248636 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248639 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248643 | orchestrator | 2026-01-09 00:57:56.248647 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-09 00:57:56.248651 | orchestrator | Friday 09 January 2026 
00:50:10 +0000 (0:00:00.339) 0:04:04.904 ******** 2026-01-09 00:57:56.248655 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248658 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248662 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248666 | orchestrator | 2026-01-09 00:57:56.248670 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-09 00:57:56.248673 | orchestrator | Friday 09 January 2026 00:50:11 +0000 (0:00:00.580) 0:04:05.485 ******** 2026-01-09 00:57:56.248677 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248681 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248685 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248689 | orchestrator | 2026-01-09 00:57:56.248692 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-09 00:57:56.248696 | orchestrator | Friday 09 January 2026 00:50:11 +0000 (0:00:00.317) 0:04:05.802 ******** 2026-01-09 00:57:56.248700 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:57:56.248704 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.248707 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:57:56.248711 | orchestrator | 2026-01-09 00:57:56.248715 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-09 00:57:56.248719 | orchestrator | Friday 09 January 2026 00:50:11 +0000 (0:00:00.368) 0:04:06.171 ******** 2026-01-09 00:57:56.248722 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248726 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248730 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248734 | orchestrator | 2026-01-09 00:57:56.248737 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-09 00:57:56.248741 | orchestrator | Friday 09 January 2026 00:50:12 +0000 
(0:00:00.401) 0:04:06.573 ******** 2026-01-09 00:57:56.248745 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248749 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248752 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248756 | orchestrator | 2026-01-09 00:57:56.248760 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-09 00:57:56.248764 | orchestrator | Friday 09 January 2026 00:50:12 +0000 (0:00:00.662) 0:04:07.235 ******** 2026-01-09 00:57:56.248767 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248771 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248775 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248779 | orchestrator | 2026-01-09 00:57:56.248782 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-09 00:57:56.248793 | orchestrator | Friday 09 January 2026 00:50:13 +0000 (0:00:00.644) 0:04:07.879 ******** 2026-01-09 00:57:56.248796 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.248800 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.248804 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.248808 | orchestrator | 2026-01-09 00:57:56.248811 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-09 00:57:56.248815 | orchestrator | Friday 09 January 2026 00:50:14 +0000 (0:00:00.447) 0:04:08.327 ******** 2026-01-09 00:57:56.248819 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.248823 | orchestrator | 2026-01-09 00:57:56.248827 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-09 00:57:56.248831 | orchestrator | Friday 09 January 2026 00:50:15 +0000 (0:00:01.319) 0:04:09.647 ******** 2026-01-09 00:57:56.248835 | orchestrator | skipping: [testbed-node-0] 
2026-01-09 00:57:56.248838 | orchestrator |

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Friday 09 January 2026 00:50:15 +0000 (0:00:00.176) 0:04:09.823 ********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Friday 09 January 2026 00:50:16 +0000 (0:00:01.280) 0:04:11.103 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Friday 09 January 2026 00:50:17 +0000 (0:00:00.360) 0:04:11.464 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Friday 09 January 2026 00:50:17 +0000 (0:00:00.370) 0:04:11.834 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Friday 09 January 2026 00:50:19 +0000 (0:00:01.590) 0:04:13.425 ********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Create monitor directory] *************************************
Friday 09 January 2026 00:50:20 +0000 (0:00:01.323) 0:04:14.748 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Friday 09 January 2026 00:50:21 +0000 (0:00:00.908) 0:04:15.657 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Create admin keyring] *****************************************
Friday 09 January 2026 00:50:22 +0000 (0:00:00.881) 0:04:16.539 ********
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Friday 09 January 2026 00:50:24 +0000 (0:00:01.922) 0:04:18.461 ********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Friday 09 January 2026 00:50:25 +0000 (0:00:00.935) 0:04:19.397 ********
changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-0] => (item=None)
changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
changed: [testbed-node-2 -> {{ item }}]
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-0 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Friday 09 January 2026 00:50:29 +0000 (0:00:04.059) 0:04:23.456 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Friday 09 January 2026 00:50:30 +0000 (0:00:01.549) 0:04:25.006 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Friday 09 January 2026 00:50:31 +0000 (0:00:00.683) 0:04:25.689 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Friday 09 January 2026 00:50:32 +0000 (0:00:00.875) 0:04:26.565 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Friday 09 January 2026 00:50:33 +0000 (0:00:01.741) 0:04:28.306 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Friday 09 January 2026 00:50:35 +0000 (0:00:01.562) 0:04:29.869 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Friday 09 January 2026 00:50:36 +0000 (0:00:00.720) 0:04:30.590 ********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Friday 09 January 2026 00:50:37 +0000 (0:00:00.970) 0:04:31.561 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Friday 09 January 2026 00:50:37 +0000 (0:00:00.372) 0:04:31.933 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Friday 09 January 2026 00:50:38 +0000 (0:00:00.434) 0:04:32.368 ********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Friday 09 January 2026 00:50:39 +0000 (0:00:00.995) 0:04:33.363 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Friday 09 January 2026 00:50:41 +0000 (0:00:02.133) 0:04:35.497 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Friday 09 January 2026 00:50:42 +0000 (0:00:01.404) 0:04:36.901 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Start the monitor service] ************************************
Friday 09 January 2026 00:50:44 +0000 (0:00:02.087) 0:04:38.989 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Friday 09 January 2026 00:50:46 +0000 (0:00:02.090) 0:04:41.080 ********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Friday 09 January 2026 00:50:47 +0000 (0:00:00.565) 0:04:41.646 ********
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Friday 09 January 2026 00:51:09 +0000 (0:00:21.919) 0:05:03.565 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Friday 09 January 2026 00:51:19 +0000 (0:00:09.808) 0:05:13.374 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Friday 09 January 2026 00:51:19 +0000 (0:00:00.462) 0:05:13.836 ********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__db09779b9a3719be88936f2538c7c5db64196143'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 09 January 2026 00:51:35 +0000 (0:00:15.680) 0:05:29.516 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Friday 09 January 2026 00:51:35 +0000 (0:00:00.318) 0:05:29.835 ********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Friday 09 January 2026 00:51:36 +0000 (0:00:00.694) 0:05:30.529 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Friday 09 January 2026 00:51:36 +0000 (0:00:00.390) 0:05:30.919 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Friday 09 January 2026 00:51:36 +0000 (0:00:00.338) 0:05:31.258 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Friday 09 January 2026 00:51:37 +0000 (0:00:00.875) 0:05:32.134 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 09 January 2026 00:51:38 +0000 (0:00:00.986) 0:05:33.120 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 09 January 2026 00:51:39 +0000 (0:00:00.520) 0:05:33.641 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 09 January 2026 00:51:40 +0000 (0:00:00.803) 0:05:34.444 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 09 January 2026 00:51:40 +0000 (0:00:00.846) 0:05:35.291 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 09 January 2026 00:51:41 +0000 (0:00:00.365) 0:05:35.657 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 09 January 2026 00:51:41 +0000 (0:00:00.605) 0:05:36.262 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 09 January 2026 00:51:42 +0000 (0:00:00.302) 0:05:36.565 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 09 January 2026 00:51:42 +0000 (0:00:00.743) 0:05:37.308 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 09 January 2026 00:51:43 +0000 (0:00:00.330) 0:05:37.639 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 09 January 2026 00:51:43 +0000 (0:00:00.584) 0:05:38.223 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 09 January 2026 00:51:44 +0000 (0:00:00.837) 0:05:39.060 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 09 January 2026 00:51:45 +0000 (0:00:00.848) 0:05:39.909 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 09 January 2026 00:51:45 +0000 (0:00:00.329) 0:05:40.239 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 09 January 2026 00:51:46 +0000 (0:00:00.413) 0:05:40.652 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
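The quorum wait above ("FAILED - RETRYING: ... (10 retries left)" followed by `ok`) is Ansible's standard `until`/`retries`/`delay` loop: probe the monitors, and retry a bounded number of times until the quorum check succeeds. A minimal sketch of the same loop in Python — the `probe` here is a stand-in for the role's actual `ceph quorum_status` check, not code from this job:

```python
import time
from typing import Callable

def wait_until(probe: Callable[[], bool], retries: int = 10, delay: float = 1.0) -> bool:
    """Return True as soon as probe() succeeds; retry up to `retries`
    times with `delay` seconds between attempts, mirroring Ansible's
    until/retries/delay semantics."""
    for _attempt in range(retries):
        if probe():
            return True
        # Ansible logs "FAILED - RETRYING ... (N retries left)" at this point.
        time.sleep(delay)
    return False

# Toy probe that succeeds on the third call, like a mon quorum that
# forms shortly after the daemons start (hypothetical stand-in).
calls = {"n": 0}
def probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(probe, retries=10, delay=0))  # True
```

The important property, visible in the log, is that a transient failure is not fatal: only exhausting all retries would fail the task.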

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 09 January 2026 00:51:46 +0000 (0:00:00.651) 0:05:41.304 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 09 January 2026 00:51:47 +0000 (0:00:00.344) 0:05:41.648 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 09 January 2026 00:51:47 +0000 (0:00:00.333) 0:05:41.982 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 09 January 2026 00:51:48 +0000 (0:00:00.337) 0:05:42.319 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
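The "Set cluster configs" task earlier in this play applies each `(section, option, value)` item from the loop output via the cluster's central config store. The items shown in the log translate into `ceph config set` CLI calls; the sketch below only builds the command strings from those logged values (nothing is executed — running them requires a live cluster):

```python
# Options taken from the "Set cluster configs" loop items in the log
# (the omitted osd_crush_chooseleaf_type placeholder is skipped, as in
# the log). Command construction only; no cluster access here.
configs = {
    "public_network": "192.168.16.0/20",
    "cluster_network": "192.168.16.0/20",
    "osd_pool_default_crush_rule": -1,
    "ms_bind_ipv6": "False",
    "ms_bind_ipv4": "True",
}

def config_set_commands(section: str, options: dict) -> list[str]:
    """Render one `ceph config set <section> <option> <value>` per item."""
    return [f"ceph config set {section} {key} {value}" for key, value in options.items()]

cmds = config_set_commands("global", configs)
print(cmds[0])  # ceph config set global public_network 192.168.16.0/20
```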

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 09 January 2026 00:51:48 +0000 (0:00:00.642) 0:05:42.961 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 09 January 2026 00:51:49 +0000 (0:00:00.355) 0:05:43.317 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 09 January 2026 00:51:49 +0000 (0:00:00.345) 0:05:43.662 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Friday 09 January 2026 00:51:50 +0000 (0:00:00.801) 0:05:44.463 ********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Friday 09 January 2026 00:51:50 +0000 (0:00:00.664) 0:05:45.127 ********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Friday 09 January 2026 00:51:51 +0000 (0:00:00.530) 0:05:45.658 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Friday 09 January 2026 00:51:52 +0000 (0:00:00.677) 0:05:46.335 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Friday 09 January 2026 00:51:52 +0000 (0:00:00.620) 0:05:46.955 ********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Friday 09 January 2026 00:52:03 +0000 (0:00:10.603) 0:05:57.559 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Friday 09 January 2026 00:52:03 +0000 (0:00:00.342) 0:05:57.902 ********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Friday 09 January 2026 00:52:05 +0000 (0:00:02.260) 0:06:00.162 ********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-2] => (item=None)
changed: [testbed-node-1] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Friday 09 January 2026 00:52:07 +0000 (0:00:01.234) 0:06:01.397 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Friday 09 January 2026 00:52:08 +0000 (0:00:01.053) 0:06:02.451 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Friday 09 January 2026 00:52:08 +0000 (0:00:00.308) 0:06:02.760 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Friday 09 January 2026 00:52:08 +0000 (0:00:00.377) 0:06:03.137 ********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Friday 09 January 2026 00:52:09 +0000 (0:00:00.825) 0:06:03.962 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Friday 09 January 2026 00:52:09 +0000 (0:00:00.348) 0:06:04.310 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Friday 09 January 2026 00:52:10 +0000 (0:00:00.324) 0:06:04.635 ********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Friday 09 January 2026 00:52:11 +0000 (0:00:00.790) 0:06:05.426 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Friday 09 January 2026 00:52:12 +0000 (0:00:01.405) 0:06:06.832 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Friday 09 January 2026 00:52:13 +0000 (0:00:01.215) 0:06:08.047 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Friday 09 January 2026 00:52:15 +0000 (0:00:01.998) 0:06:10.046 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Friday 09 January 2026 00:52:18 +0000 (0:00:02.382) 0:06:12.428 ********
skipping: [testbed-node-0]
skipping:
[testbed-node-1] 2026-01-09 00:57:56.250893 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-09 00:57:56.250899 | orchestrator | 2026-01-09 00:57:56.250903 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-09 00:57:56.250907 | orchestrator | Friday 09 January 2026 00:52:18 +0000 (0:00:00.435) 0:06:12.863 ******** 2026-01-09 00:57:56.250925 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-09 00:57:56.250934 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-09 00:57:56.250938 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-09 00:57:56.250942 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-09 00:57:56.250945 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-09 00:57:56.250949 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
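The "Wait for all mgr to be up" retries above poll the cluster until every mgr daemon has registered. A minimal sketch of that readiness check, assuming the JSON shape of `ceph mgr dump` output (the sample data below is illustrative, not taken from this run):

```python
import json

# Illustrative sample of `ceph mgr dump -f json` output (hypothetical values);
# the real command reports the active mgr plus a list of standby daemons.
mgr_dump = json.dumps({
    "active_name": "testbed-node-0",
    "available": True,
    "standbys": [{"name": "testbed-node-1"}, {"name": "testbed-node-2"}],
})

def all_mgrs_up(dump_json: str, expected: int) -> bool:
    """Return True once the active mgr is available and the number of
    registered daemons (active + standbys) reaches the expected count."""
    dump = json.loads(dump_json)
    registered = 1 + len(dump.get("standbys", []))
    return bool(dump.get("available")) and registered >= expected

print(all_mgrs_up(mgr_dump, expected=3))  # three mgr nodes in this deployment
```

Each retry in the log corresponds to one such poll; the task succeeds (`ok:` on the next line) once all three mgrs have come up within the retry budget.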
2026-01-09 00:57:56.250953 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-09 00:57:56.250957 | orchestrator | 2026-01-09 00:57:56.250960 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-09 00:57:56.250964 | orchestrator | Friday 09 January 2026 00:52:55 +0000 (0:00:36.575) 0:06:49.438 ******** 2026-01-09 00:57:56.250968 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-09 00:57:56.250972 | orchestrator | 2026-01-09 00:57:56.250976 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-09 00:57:56.250979 | orchestrator | Friday 09 January 2026 00:52:56 +0000 (0:00:01.366) 0:06:50.805 ******** 2026-01-09 00:57:56.250983 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.250987 | orchestrator | 2026-01-09 00:57:56.250991 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-09 00:57:56.250994 | orchestrator | Friday 09 January 2026 00:52:56 +0000 (0:00:00.296) 0:06:51.102 ******** 2026-01-09 00:57:56.250998 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.251002 | orchestrator | 2026-01-09 00:57:56.251006 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-09 00:57:56.251009 | orchestrator | Friday 09 January 2026 00:52:56 +0000 (0:00:00.164) 0:06:51.266 ******** 2026-01-09 00:57:56.251020 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-09 00:57:56.251023 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-09 00:57:56.251027 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-09 00:57:56.251031 | orchestrator | 2026-01-09 00:57:56.251035 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-09 00:57:56.251039 | orchestrator | Friday 09 January 2026 00:53:03 +0000 (0:00:06.734) 0:06:58.000 ******** 2026-01-09 00:57:56.251042 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-09 00:57:56.251046 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-09 00:57:56.251050 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-09 00:57:56.251054 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-09 00:57:56.251057 | orchestrator | 2026-01-09 00:57:56.251061 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-09 00:57:56.251065 | orchestrator | Friday 09 January 2026 00:53:09 +0000 (0:00:05.469) 0:07:03.470 ******** 2026-01-09 00:57:56.251069 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.251073 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.251076 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.251080 | orchestrator | 2026-01-09 00:57:56.251084 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-09 00:57:56.251088 | orchestrator | Friday 09 January 2026 00:53:09 +0000 (0:00:00.677) 0:07:04.148 ******** 2026-01-09 00:57:56.251091 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:57:56.251095 | orchestrator | 2026-01-09 00:57:56.251099 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-09 00:57:56.251104 | orchestrator | Friday 09 January 2026 00:53:10 +0000 (0:00:00.860) 0:07:05.008 ******** 2026-01-09 00:57:56.251111 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.251117 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.251124 | orchestrator | ok: 
[testbed-node-2] 2026-01-09 00:57:56.251130 | orchestrator | 2026-01-09 00:57:56.251136 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-09 00:57:56.251142 | orchestrator | Friday 09 January 2026 00:53:11 +0000 (0:00:00.326) 0:07:05.335 ******** 2026-01-09 00:57:56.251149 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:57:56.251155 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:57:56.251161 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:57:56.251168 | orchestrator | 2026-01-09 00:57:56.251174 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-09 00:57:56.251180 | orchestrator | Friday 09 January 2026 00:53:12 +0000 (0:00:01.397) 0:07:06.733 ******** 2026-01-09 00:57:56.251186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-09 00:57:56.251192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-09 00:57:56.251198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-09 00:57:56.251205 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:57:56.251211 | orchestrator | 2026-01-09 00:57:56.251218 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-09 00:57:56.251225 | orchestrator | Friday 09 January 2026 00:53:13 +0000 (0:00:00.631) 0:07:07.364 ******** 2026-01-09 00:57:56.251231 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:57:56.251238 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:57:56.251244 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:57:56.251250 | orchestrator | 2026-01-09 00:57:56.251257 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-09 00:57:56.251263 | orchestrator | 2026-01-09 00:57:56.251269 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-09 
00:57:56.251310 | orchestrator | Friday 09 January 2026 00:53:13 +0000 (0:00:00.816) 0:07:08.180 ******** 2026-01-09 00:57:56.251316 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.251320 | orchestrator | 2026-01-09 00:57:56.251324 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-09 00:57:56.251329 | orchestrator | Friday 09 January 2026 00:53:14 +0000 (0:00:00.557) 0:07:08.738 ******** 2026-01-09 00:57:56.251335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.251342 | orchestrator | 2026-01-09 00:57:56.251347 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-09 00:57:56.251351 | orchestrator | Friday 09 January 2026 00:53:15 +0000 (0:00:00.802) 0:07:09.541 ******** 2026-01-09 00:57:56.251355 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251359 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251363 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251366 | orchestrator | 2026-01-09 00:57:56.251370 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-09 00:57:56.251374 | orchestrator | Friday 09 January 2026 00:53:15 +0000 (0:00:00.354) 0:07:09.895 ******** 2026-01-09 00:57:56.251378 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251382 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251386 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251389 | orchestrator | 2026-01-09 00:57:56.251393 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-09 00:57:56.251397 | orchestrator | Friday 09 January 2026 00:53:16 +0000 (0:00:00.717) 0:07:10.612 ******** 
2026-01-09 00:57:56.251401 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251405 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251408 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251428 | orchestrator | 2026-01-09 00:57:56.251435 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-09 00:57:56.251440 | orchestrator | Friday 09 January 2026 00:53:17 +0000 (0:00:00.712) 0:07:11.325 ******** 2026-01-09 00:57:56.251446 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251451 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251459 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251468 | orchestrator | 2026-01-09 00:57:56.251475 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-09 00:57:56.251480 | orchestrator | Friday 09 January 2026 00:53:18 +0000 (0:00:01.156) 0:07:12.482 ******** 2026-01-09 00:57:56.251486 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251492 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251498 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251504 | orchestrator | 2026-01-09 00:57:56.251510 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-09 00:57:56.251515 | orchestrator | Friday 09 January 2026 00:53:18 +0000 (0:00:00.330) 0:07:12.812 ******** 2026-01-09 00:57:56.251521 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251527 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251533 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251539 | orchestrator | 2026-01-09 00:57:56.251545 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-09 00:57:56.251551 | orchestrator | Friday 09 January 2026 00:53:18 +0000 (0:00:00.323) 0:07:13.135 ******** 2026-01-09 00:57:56.251557 | 
orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251563 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251569 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251575 | orchestrator | 2026-01-09 00:57:56.251581 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-09 00:57:56.251587 | orchestrator | Friday 09 January 2026 00:53:19 +0000 (0:00:00.294) 0:07:13.430 ******** 2026-01-09 00:57:56.251593 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251601 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251605 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251609 | orchestrator | 2026-01-09 00:57:56.251613 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-09 00:57:56.251617 | orchestrator | Friday 09 January 2026 00:53:20 +0000 (0:00:01.083) 0:07:14.514 ******** 2026-01-09 00:57:56.251620 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251625 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251628 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251632 | orchestrator | 2026-01-09 00:57:56.251636 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-09 00:57:56.251640 | orchestrator | Friday 09 January 2026 00:53:21 +0000 (0:00:00.934) 0:07:15.448 ******** 2026-01-09 00:57:56.251643 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251647 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251651 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251655 | orchestrator | 2026-01-09 00:57:56.251658 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-09 00:57:56.251662 | orchestrator | Friday 09 January 2026 00:53:21 +0000 (0:00:00.329) 0:07:15.777 ******** 2026-01-09 00:57:56.251666 | orchestrator | skipping: 
[testbed-node-3] 2026-01-09 00:57:56.251670 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251674 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251677 | orchestrator | 2026-01-09 00:57:56.251681 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-09 00:57:56.251685 | orchestrator | Friday 09 January 2026 00:53:21 +0000 (0:00:00.331) 0:07:16.109 ******** 2026-01-09 00:57:56.251689 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251693 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251696 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251700 | orchestrator | 2026-01-09 00:57:56.251704 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-09 00:57:56.251708 | orchestrator | Friday 09 January 2026 00:53:22 +0000 (0:00:00.606) 0:07:16.715 ******** 2026-01-09 00:57:56.251712 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251715 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251719 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251723 | orchestrator | 2026-01-09 00:57:56.251727 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-09 00:57:56.251735 | orchestrator | Friday 09 January 2026 00:53:22 +0000 (0:00:00.332) 0:07:17.047 ******** 2026-01-09 00:57:56.251747 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251753 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251758 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251764 | orchestrator | 2026-01-09 00:57:56.251770 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-09 00:57:56.251776 | orchestrator | Friday 09 January 2026 00:53:23 +0000 (0:00:00.346) 0:07:17.393 ******** 2026-01-09 00:57:56.251781 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251787 | 
orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251793 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251798 | orchestrator | 2026-01-09 00:57:56.251804 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-09 00:57:56.251810 | orchestrator | Friday 09 January 2026 00:53:23 +0000 (0:00:00.354) 0:07:17.747 ******** 2026-01-09 00:57:56.251816 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251822 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251829 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251835 | orchestrator | 2026-01-09 00:57:56.251842 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-09 00:57:56.251848 | orchestrator | Friday 09 January 2026 00:53:24 +0000 (0:00:00.591) 0:07:18.339 ******** 2026-01-09 00:57:56.251854 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.251860 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.251874 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.251878 | orchestrator | 2026-01-09 00:57:56.251882 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-09 00:57:56.251886 | orchestrator | Friday 09 January 2026 00:53:24 +0000 (0:00:00.410) 0:07:18.749 ******** 2026-01-09 00:57:56.251889 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251893 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251897 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251901 | orchestrator | 2026-01-09 00:57:56.251905 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-09 00:57:56.251909 | orchestrator | Friday 09 January 2026 00:53:24 +0000 (0:00:00.367) 0:07:19.116 ******** 2026-01-09 00:57:56.251912 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251916 | orchestrator | ok: 
[testbed-node-4] 2026-01-09 00:57:56.251920 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251924 | orchestrator | 2026-01-09 00:57:56.251928 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-09 00:57:56.251931 | orchestrator | Friday 09 January 2026 00:53:25 +0000 (0:00:00.803) 0:07:19.920 ******** 2026-01-09 00:57:56.251935 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.251939 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.251943 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.251947 | orchestrator | 2026-01-09 00:57:56.251951 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-09 00:57:56.251954 | orchestrator | Friday 09 January 2026 00:53:25 +0000 (0:00:00.373) 0:07:20.293 ******** 2026-01-09 00:57:56.251958 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-09 00:57:56.251962 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-09 00:57:56.251966 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-09 00:57:56.251970 | orchestrator | 2026-01-09 00:57:56.251974 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-09 00:57:56.251978 | orchestrator | Friday 09 January 2026 00:53:26 +0000 (0:00:00.649) 0:07:20.943 ******** 2026-01-09 00:57:56.251982 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.251986 | orchestrator | 2026-01-09 00:57:56.251990 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-09 00:57:56.251993 | orchestrator | Friday 09 January 2026 00:53:27 +0000 (0:00:00.553) 0:07:21.496 ******** 2026-01-09 00:57:56.251997 | orchestrator | skipping: 
[testbed-node-3] 2026-01-09 00:57:56.252001 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.252005 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.252009 | orchestrator | 2026-01-09 00:57:56.252012 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-09 00:57:56.252016 | orchestrator | Friday 09 January 2026 00:53:27 +0000 (0:00:00.554) 0:07:22.051 ******** 2026-01-09 00:57:56.252020 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.252024 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.252028 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.252031 | orchestrator | 2026-01-09 00:57:56.252035 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-09 00:57:56.252039 | orchestrator | Friday 09 January 2026 00:53:28 +0000 (0:00:00.324) 0:07:22.376 ******** 2026-01-09 00:57:56.252043 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.252047 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.252050 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.252054 | orchestrator | 2026-01-09 00:57:56.252058 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-09 00:57:56.252062 | orchestrator | Friday 09 January 2026 00:53:28 +0000 (0:00:00.696) 0:07:23.073 ******** 2026-01-09 00:57:56.252066 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.252069 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.252073 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.252081 | orchestrator | 2026-01-09 00:57:56.252084 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-09 00:57:56.252088 | orchestrator | Friday 09 January 2026 00:53:29 +0000 (0:00:00.379) 0:07:23.452 ******** 2026-01-09 00:57:56.252092 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-09 00:57:56.252097 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-09 00:57:56.252101 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-09 00:57:56.252110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-09 00:57:56.252117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-09 00:57:56.252121 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-09 00:57:56.252124 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-09 00:57:56.252128 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-09 00:57:56.252132 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-09 00:57:56.252136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-09 00:57:56.252139 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-09 00:57:56.252143 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-09 00:57:56.252147 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-09 00:57:56.252151 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-09 00:57:56.252154 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-09 00:57:56.252158 | orchestrator | 2026-01-09 00:57:56.252162 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-09 00:57:56.252166 | orchestrator | Friday 09 January 2026 00:53:32 +0000 (0:00:03.769) 0:07:27.221 ******** 2026-01-09 00:57:56.252169 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.252173 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.252177 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.252181 | orchestrator | 2026-01-09 00:57:56.252185 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-09 00:57:56.252191 | orchestrator | Friday 09 January 2026 00:53:33 +0000 (0:00:00.376) 0:07:27.598 ******** 2026-01-09 00:57:56.252198 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.252203 | orchestrator | 2026-01-09 00:57:56.252209 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-09 00:57:56.252215 | orchestrator | Friday 09 January 2026 00:53:33 +0000 (0:00:00.576) 0:07:28.175 ******** 2026-01-09 00:57:56.252222 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-09 00:57:56.252229 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-09 00:57:56.252235 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-09 00:57:56.252242 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-09 00:57:56.252248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-09 00:57:56.252254 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-09 00:57:56.252260 | orchestrator | 2026-01-09 00:57:56.252267 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-09 00:57:56.252273 | orchestrator | Friday 09 January 2026 00:53:35 +0000 (0:00:01.607) 0:07:29.782 ******** 2026-01-09 00:57:56.252278 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None)
2026-01-09 00:57:56.252288 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-09 00:57:56.252291 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-09 00:57:56.252295 | orchestrator |
2026-01-09 00:57:56.252299 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-09 00:57:56.252303 | orchestrator | Friday 09 January 2026 00:53:37 +0000 (0:00:02.305) 0:07:32.088 ********
2026-01-09 00:57:56.252307 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-09 00:57:56.252311 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-09 00:57:56.252315 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.252319 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-09 00:57:56.252322 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-09 00:57:56.252326 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.252330 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-09 00:57:56.252334 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-09 00:57:56.252338 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.252341 | orchestrator |
2026-01-09 00:57:56.252345 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-09 00:57:56.252349 | orchestrator | Friday 09 January 2026 00:53:39 +0000 (0:00:01.356) 0:07:33.445 ********
2026-01-09 00:57:56.252353 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.252357 | orchestrator |
2026-01-09 00:57:56.252361 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-09 00:57:56.252365 | orchestrator | Friday 09 January 2026 00:53:41 +0000 (0:00:02.290) 0:07:35.735 ********
2026-01-09 00:57:56.252368 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.252372 | orchestrator |
2026-01-09 00:57:56.252376 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-09 00:57:56.252380 | orchestrator | Friday 09 January 2026 00:53:41 +0000 (0:00:00.560) 0:07:36.296 ********
2026-01-09 00:57:56.252384 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8cf949ba-669c-5e80-aece-22faa35a4e96', 'data_vg': 'ceph-8cf949ba-669c-5e80-aece-22faa35a4e96'})
2026-01-09 00:57:56.252393 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-11533966-1bdf-5daf-a468-949db0b9bc1b', 'data_vg': 'ceph-11533966-1bdf-5daf-a468-949db0b9bc1b'})
2026-01-09 00:57:56.252400 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2edbad7c-3e58-5742-8752-3a5bd5d561b5', 'data_vg': 'ceph-2edbad7c-3e58-5742-8752-3a5bd5d561b5'})
2026-01-09 00:57:56.252404 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f', 'data_vg': 'ceph-aa3bcdda-c0e8-51aa-8164-bd5963cdd10f'})
2026-01-09 00:57:56.252408 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-209c90a3-928e-55d9-9ec8-b900c012dcc3', 'data_vg': 'ceph-209c90a3-928e-55d9-9ec8-b900c012dcc3'})
2026-01-09 00:57:56.252427 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-827da1a7-5d25-503a-baf6-83b57b40e5ca', 'data_vg': 'ceph-827da1a7-5d25-503a-baf6-83b57b40e5ca'})
2026-01-09 00:57:56.252431 | orchestrator |
2026-01-09 00:57:56.252435 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-09 00:57:56.252439 | orchestrator | Friday 09 January 2026 00:54:21 +0000 (0:00:39.301) 0:08:15.598 ********
2026-01-09 00:57:56.252443 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252446 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252450 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252454 | orchestrator |
2026-01-09 00:57:56.252458 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-09 00:57:56.252462 | orchestrator | Friday 09 January 2026 00:54:21 +0000 (0:00:00.354) 0:08:15.952 ********
2026-01-09 00:57:56.252465 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.252472 | orchestrator |
2026-01-09 00:57:56.252476 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-09 00:57:56.252480 | orchestrator | Friday 09 January 2026 00:54:22 +0000 (0:00:00.530) 0:08:16.482 ********
2026-01-09 00:57:56.252484 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.252487 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.252491 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.252495 | orchestrator |
2026-01-09 00:57:56.252499 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-09 00:57:56.252503 | orchestrator | Friday 09 January 2026 00:54:23 +0000 (0:00:01.057) 0:08:17.540 ********
2026-01-09 00:57:56.252506 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.252510 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.252514 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.252517 | orchestrator |
2026-01-09 00:57:56.252521 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-09 00:57:56.252525 | orchestrator | Friday 09 January 2026 00:54:26 +0000 (0:00:02.971) 0:08:20.511 ********
2026-01-09 00:57:56.252529 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.252533 | orchestrator |
2026-01-09 00:57:56.252536 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-09 00:57:56.252540 | orchestrator | Friday 09 January 2026 00:54:26 +0000 (0:00:00.556) 0:08:21.068 ********
2026-01-09 00:57:56.252544 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.252548 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.252551 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.252555 | orchestrator |
2026-01-09 00:57:56.252559 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-09 00:57:56.252563 | orchestrator | Friday 09 January 2026 00:54:28 +0000 (0:00:01.664) 0:08:22.732 ********
2026-01-09 00:57:56.252566 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.252570 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.252574 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.252578 | orchestrator |
2026-01-09 00:57:56.252582 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-09 00:57:56.252585 | orchestrator | Friday 09 January 2026 00:54:29 +0000 (0:00:01.205) 0:08:23.937 ********
2026-01-09 00:57:56.252589 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.252593 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.252597 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.252600 | orchestrator |
2026-01-09 00:57:56.252604 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-09 00:57:56.252608 | orchestrator | Friday 09 January 2026 00:54:31 +0000 (0:00:02.221) 0:08:26.158 ********
2026-01-09 00:57:56.252612 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252616 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252619 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252623 | orchestrator |
2026-01-09 00:57:56.252627 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-09 00:57:56.252631 | orchestrator | Friday 09 January 2026 00:54:32 +0000 (0:00:00.309) 0:08:26.468 ********
2026-01-09 00:57:56.252634 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252638 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252642 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252645 | orchestrator |
2026-01-09 00:57:56.252649 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-09 00:57:56.252653 | orchestrator | Friday 09 January 2026 00:54:32 +0000 (0:00:00.633) 0:08:27.101 ********
2026-01-09 00:57:56.252657 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-01-09 00:57:56.252661 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-09 00:57:56.252664 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-01-09 00:57:56.252668 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-01-09 00:57:56.252676 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-01-09 00:57:56.252679 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-01-09 00:57:56.252683 | orchestrator |
2026-01-09 00:57:56.252687 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-09 00:57:56.252690 | orchestrator | Friday 09 January 2026 00:54:33 +0000 (0:00:01.093) 0:08:28.195 ********
2026-01-09 00:57:56.252694 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-09 00:57:56.252698 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-09 00:57:56.252705 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-09 00:57:56.252712 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-09 00:57:56.252715 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-01-09 00:57:56.252719 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-09 00:57:56.252723 | orchestrator |
2026-01-09 00:57:56.252727 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-09 00:57:56.252731 | orchestrator | Friday 09 January 2026 00:54:35 +0000 (0:00:02.098) 0:08:30.293 ********
2026-01-09 00:57:56.252734 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-09 00:57:56.252738 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-09 00:57:56.252742 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-09 00:57:56.252745 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-09 00:57:56.252749 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-01-09 00:57:56.252753 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-09 00:57:56.252757 | orchestrator |
2026-01-09 00:57:56.252760 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-09 00:57:56.252764 | orchestrator | Friday 09 January 2026 00:54:40 +0000 (0:00:04.466) 0:08:34.760 ********
2026-01-09 00:57:56.252768 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252772 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252775 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.252779 | orchestrator |
2026-01-09 00:57:56.252783 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-09 00:57:56.252786 | orchestrator | Friday 09 January 2026 00:54:43 +0000 (0:00:03.121) 0:08:37.881 ********
2026-01-09 00:57:56.252790 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252794 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252798 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-09 00:57:56.252801 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.252806 | orchestrator |
2026-01-09 00:57:56.252809 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-09 00:57:56.252813 | orchestrator | Friday 09 January 2026 00:54:55 +0000 (0:00:12.304) 0:08:50.186 ********
2026-01-09 00:57:56.252817 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252821 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252825 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252829 | orchestrator |
2026-01-09 00:57:56.252832 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-09 00:57:56.252836 | orchestrator | Friday 09 January 2026 00:54:56 +0000 (0:00:01.062) 0:08:51.248 ********
2026-01-09 00:57:56.252840 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252843 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252847 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252851 | orchestrator |
2026-01-09 00:57:56.252855 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-09 00:57:56.252860 | orchestrator | Friday 09 January 2026 00:54:57 +0000 (0:00:00.369) 0:08:51.617 ********
2026-01-09 00:57:56.252866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.252872 | orchestrator |
2026-01-09 00:57:56.252877 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-09 00:57:56.252891 | orchestrator | Friday 09 January 2026 00:54:57 +0000 (0:00:00.545) 0:08:52.163 ********
2026-01-09 00:57:56.252900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.252905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.252911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.252918 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252924 | orchestrator |
2026-01-09 00:57:56.252930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-09 00:57:56.252936 | orchestrator | Friday 09 January 2026 00:54:58 +0000 (0:00:00.972) 0:08:53.136 ********
2026-01-09 00:57:56.252941 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252947 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.252953 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.252958 | orchestrator |
2026-01-09 00:57:56.252964 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-09 00:57:56.252969 | orchestrator | Friday 09 January 2026 00:54:59 +0000 (0:00:00.318) 0:08:53.455 ********
2026-01-09 00:57:56.252975 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.252980 | orchestrator |
2026-01-09 00:57:56.252986 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-09 00:57:56.252992 | orchestrator | Friday 09 January 2026 00:54:59 +0000 (0:00:00.252) 0:08:53.707 ********
2026-01-09 00:57:56.252998 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253004 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253010 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253016 | orchestrator |
2026-01-09 00:57:56.253021 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-09 00:57:56.253027 | orchestrator | Friday 09 January 2026 00:54:59 +0000 (0:00:00.303) 0:08:54.010 ********
2026-01-09 00:57:56.253033 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253039 | orchestrator |
2026-01-09 00:57:56.253046 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-09 00:57:56.253052 | orchestrator | Friday 09 January 2026 00:54:59 +0000 (0:00:00.211) 0:08:54.221 ********
2026-01-09 00:57:56.253058 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253065 | orchestrator |
2026-01-09 00:57:56.253071 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-09 00:57:56.253078 | orchestrator | Friday 09 January 2026 00:55:00 +0000 (0:00:00.255) 0:08:54.477 ********
2026-01-09 00:57:56.253084 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253090 | orchestrator |
2026-01-09 00:57:56.253094 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-09 00:57:56.253102 | orchestrator | Friday 09 January 2026 00:55:00 +0000 (0:00:00.153) 0:08:54.631 ********
2026-01-09 00:57:56.253106 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253110 | orchestrator |
2026-01-09 00:57:56.253118 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-09 00:57:56.253122 | orchestrator | Friday 09 January 2026 00:55:00 +0000 (0:00:00.232) 0:08:54.864 ********
2026-01-09 00:57:56.253126 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253129 | orchestrator |
2026-01-09 00:57:56.253133 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-09 00:57:56.253137 | orchestrator | Friday 09 January 2026 00:55:01 +0000 (0:00:00.921) 0:08:55.785 ********
2026-01-09 00:57:56.253141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-09 00:57:56.253144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-09 00:57:56.253148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-09 00:57:56.253152 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253156 | orchestrator |
2026-01-09 00:57:56.253159 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-09 00:57:56.253163 | orchestrator | Friday 09 January 2026 00:55:01 +0000 (0:00:00.402) 0:08:56.188 ********
2026-01-09 00:57:56.253173 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253177 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253181 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253185 | orchestrator |
2026-01-09 00:57:56.253188 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-09 00:57:56.253192 | orchestrator | Friday 09 January 2026 00:55:02 +0000 (0:00:00.334) 0:08:56.522 ********
2026-01-09 00:57:56.253196 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253200 | orchestrator |
2026-01-09 00:57:56.253203 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-09 00:57:56.253207 | orchestrator | Friday 09 January 2026 00:55:02 +0000 (0:00:00.222) 0:08:56.745 ********
2026-01-09 00:57:56.253211 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253215 | orchestrator |
2026-01-09 00:57:56.253219 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-09 00:57:56.253222 | orchestrator |
2026-01-09 00:57:56.253226 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-09 00:57:56.253230 | orchestrator | Friday 09 January 2026 00:55:03 +0000 (0:00:00.931) 0:08:57.676 ********
2026-01-09 00:57:56.253234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.253239 | orchestrator |
2026-01-09 00:57:56.253243 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-09 00:57:56.253247 | orchestrator | Friday 09 January 2026 00:55:04 +0000 (0:00:01.222) 0:08:58.899 ********
2026-01-09 00:57:56.253251 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.253255 | orchestrator |
2026-01-09 00:57:56.253259 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-09 00:57:56.253262 | orchestrator | Friday 09 January 2026 00:55:05 +0000 (0:00:01.124) 0:09:00.023 ********
2026-01-09 00:57:56.253266 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253270 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253274 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253277 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253281 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253285 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253289 | orchestrator |
2026-01-09 00:57:56.253293 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-09 00:57:56.253296 | orchestrator | Friday 09 January 2026 00:55:07 +0000 (0:00:01.538) 0:09:01.562 ********
2026-01-09 00:57:56.253300 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253304 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253308 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253312 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253315 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253319 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253323 | orchestrator |
2026-01-09 00:57:56.253326 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-09 00:57:56.253330 | orchestrator | Friday 09 January 2026 00:55:08 +0000 (0:00:00.802) 0:09:02.365 ********
2026-01-09 00:57:56.253334 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253338 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253341 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253345 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253349 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253353 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253356 | orchestrator |
2026-01-09 00:57:56.253360 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-09 00:57:56.253364 | orchestrator | Friday 09 January 2026 00:55:09 +0000 (0:00:01.054) 0:09:03.419 ********
2026-01-09 00:57:56.253371 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253375 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253379 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253383 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253386 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253390 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253394 | orchestrator |
2026-01-09 00:57:56.253398 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-09 00:57:56.253401 | orchestrator | Friday 09 January 2026 00:55:09 +0000 (0:00:00.779) 0:09:04.199 ********
2026-01-09 00:57:56.253405 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253409 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253447 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253452 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253455 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253459 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253463 | orchestrator |
2026-01-09 00:57:56.253467 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-09 00:57:56.253473 | orchestrator | Friday 09 January 2026 00:55:11 +0000 (0:00:01.398) 0:09:05.597 ********
2026-01-09 00:57:56.253477 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253484 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253488 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253492 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253496 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253499 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253503 | orchestrator |
2026-01-09 00:57:56.253507 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-09 00:57:56.253510 | orchestrator | Friday 09 January 2026 00:55:11 +0000 (0:00:00.573) 0:09:06.171 ********
2026-01-09 00:57:56.253514 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253518 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253522 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253525 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253529 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253533 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253536 | orchestrator |
2026-01-09 00:57:56.253540 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-09 00:57:56.253544 | orchestrator | Friday 09 January 2026 00:55:12 +0000 (0:00:00.989) 0:09:07.160 ********
2026-01-09 00:57:56.253548 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253551 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253555 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253559 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253562 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253566 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253570 | orchestrator |
2026-01-09 00:57:56.253573 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-09 00:57:56.253577 | orchestrator | Friday 09 January 2026 00:55:14 +0000 (0:00:01.183) 0:09:08.344 ********
2026-01-09 00:57:56.253581 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253585 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253588 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253592 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253596 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253599 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253603 | orchestrator |
2026-01-09 00:57:56.253607 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-09 00:57:56.253611 | orchestrator | Friday 09 January 2026 00:55:15 +0000 (0:00:01.368) 0:09:09.712 ********
2026-01-09 00:57:56.253614 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253618 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253622 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253626 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253633 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253637 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253641 | orchestrator |
2026-01-09 00:57:56.253645 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-09 00:57:56.253648 | orchestrator | Friday 09 January 2026 00:55:15 +0000 (0:00:00.589) 0:09:10.302 ********
2026-01-09 00:57:56.253652 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253656 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253660 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253663 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253667 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253671 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253674 | orchestrator |
2026-01-09 00:57:56.253678 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-09 00:57:56.253682 | orchestrator | Friday 09 January 2026 00:55:16 +0000 (0:00:00.928) 0:09:11.230 ********
2026-01-09 00:57:56.253686 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253689 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253693 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253697 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253701 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253704 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253708 | orchestrator |
2026-01-09 00:57:56.253712 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-09 00:57:56.253716 | orchestrator | Friday 09 January 2026 00:55:17 +0000 (0:00:00.685) 0:09:11.916 ********
2026-01-09 00:57:56.253719 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253723 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253727 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253731 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253734 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253738 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253742 | orchestrator |
2026-01-09 00:57:56.253746 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-09 00:57:56.253750 | orchestrator | Friday 09 January 2026 00:55:18 +0000 (0:00:00.852) 0:09:12.768 ********
2026-01-09 00:57:56.253754 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253757 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253761 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253765 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253768 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253772 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253776 | orchestrator |
2026-01-09 00:57:56.253780 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-09 00:57:56.253783 | orchestrator | Friday 09 January 2026 00:55:19 +0000 (0:00:00.611) 0:09:13.380 ********
2026-01-09 00:57:56.253787 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253791 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253795 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253798 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253802 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253806 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253810 | orchestrator |
2026-01-09 00:57:56.253813 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-09 00:57:56.253817 | orchestrator | Friday 09 January 2026 00:55:19 +0000 (0:00:00.890) 0:09:14.271 ********
2026-01-09 00:57:56.253821 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253825 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253828 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253832 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:57:56.253836 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:57:56.253839 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:57:56.253843 | orchestrator |
2026-01-09 00:57:56.253847 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-09 00:57:56.253860 | orchestrator | Friday 09 January 2026 00:55:20 +0000 (0:00:00.637) 0:09:14.908 ********
2026-01-09 00:57:56.253864 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.253868 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.253872 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.253875 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253879 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253883 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253886 | orchestrator |
2026-01-09 00:57:56.253890 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-09 00:57:56.253894 | orchestrator | Friday 09 January 2026 00:55:21 +0000 (0:00:00.925) 0:09:15.833 ********
2026-01-09 00:57:56.253898 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253901 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253905 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253909 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253912 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253916 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253920 | orchestrator |
2026-01-09 00:57:56.253924 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-09 00:57:56.253927 | orchestrator | Friday 09 January 2026 00:55:22 +0000 (0:00:00.640) 0:09:16.474 ********
2026-01-09 00:57:56.253931 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.253935 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.253938 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.253942 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.253946 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.253949 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.253953 | orchestrator |
2026-01-09 00:57:56.253957 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-09 00:57:56.253961 | orchestrator | Friday 09 January 2026 00:55:23 +0000 (0:00:01.386) 0:09:17.861 ********
2026-01-09 00:57:56.253964 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.253968 | orchestrator |
2026-01-09 00:57:56.253972 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-09 00:57:56.253976 | orchestrator | Friday 09 January 2026 00:55:27 +0000 (0:00:04.210) 0:09:22.071 ********
2026-01-09 00:57:56.253980 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-09 00:57:56.253983 | orchestrator |
2026-01-09 00:57:56.253987 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-09 00:57:56.253991 | orchestrator | Friday 09 January 2026 00:55:29 +0000 (0:00:02.099) 0:09:24.171 ********
2026-01-09 00:57:56.253995 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.253998 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.254002 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.254006 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.254010 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.254041 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.254045 | orchestrator |
2026-01-09 00:57:56.254048 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-09 00:57:56.254052 | orchestrator | Friday 09 January 2026 00:55:31 +0000 (0:00:02.067) 0:09:26.238 ********
2026-01-09 00:57:56.254056 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.254060 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.254063 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.254067 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.254071 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.254074 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.254078 | orchestrator |
2026-01-09 00:57:56.254082 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-09 00:57:56.254086 | orchestrator | Friday 09 January 2026 00:55:33 +0000 (0:00:01.151) 0:09:27.389 ********
2026-01-09 00:57:56.254090 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.254099 | orchestrator |
2026-01-09 00:57:56.254102 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-09 00:57:56.254106 | orchestrator | Friday 09 January 2026 00:55:34 +0000 (0:00:01.377) 0:09:28.767 ********
2026-01-09 00:57:56.254110 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.254114 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.254117 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.254121 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.254125 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.254128 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.254132 | orchestrator |
2026-01-09 00:57:56.254136 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-09 00:57:56.254140 | orchestrator | Friday 09 January 2026 00:55:36 +0000 (0:00:02.335) 0:09:31.103 ********
2026-01-09 00:57:56.254143 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.254147 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.254151 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.254154 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.254158 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.254162 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.254165 | orchestrator |
2026-01-09 00:57:56.254169 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-09 00:57:56.254173 | orchestrator | Friday 09 January 2026 00:55:40 +0000 (0:00:03.659) 0:09:34.763 ********
2026-01-09 00:57:56.254177 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:57:56.254181 | orchestrator |
2026-01-09 00:57:56.254185 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-09 00:57:56.254188 | orchestrator | Friday 09 January 2026 00:55:41 +0000 (0:00:01.482) 0:09:36.245 ********
2026-01-09 00:57:56.254192 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.254196 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.254200 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.254203 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.254207 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.254211 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.254215 | orchestrator |
2026-01-09 00:57:56.254218 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-09 00:57:56.254228 | orchestrator | Friday 09 January 2026 00:55:42 +0000 (0:00:00.974) 0:09:37.220 ********
2026-01-09 00:57:56.254232 | orchestrator | changed: [testbed-node-4]
2026-01-09 00:57:56.254236 | orchestrator | changed: [testbed-node-3]
2026-01-09 00:57:56.254240 | orchestrator | changed: [testbed-node-5]
2026-01-09 00:57:56.254243 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:57:56.254247 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:57:56.254251 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:57:56.254255 | orchestrator |
2026-01-09 00:57:56.254258 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-09 00:57:56.254262 | orchestrator | Friday 09 January 2026 00:55:45 +0000 (0:00:02.530) 0:09:39.751 ********
2026-01-09 00:57:56.254266 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.254270 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.254273 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.254277 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:57:56.254281 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:57:56.254284 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:57:56.254288 | orchestrator |
2026-01-09 00:57:56.254292 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-09 00:57:56.254296 | orchestrator |
2026-01-09 00:57:56.254299 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-09 00:57:56.254303 | orchestrator | Friday 09 January 2026 00:55:46 +0000 (0:00:01.180) 0:09:40.932 ********
2026-01-09 00:57:56.254311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.254315 | orchestrator |
2026-01-09 00:57:56.254319 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-09 00:57:56.254322 | orchestrator | Friday 09 January 2026 00:55:47 +0000 (0:00:00.605) 0:09:41.537 ********
2026-01-09 00:57:56.254326 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 00:57:56.254330 | orchestrator |
2026-01-09 00:57:56.254334 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-09 00:57:56.254337 | orchestrator | Friday 09 January 2026 00:55:48 +0000 (0:00:00.778) 0:09:42.316 ********
2026-01-09 00:57:56.254341 | orchestrator | skipping: [testbed-node-3]
2026-01-09 00:57:56.254345 | orchestrator | skipping: [testbed-node-4]
2026-01-09 00:57:56.254351 | orchestrator | skipping: [testbed-node-5]
2026-01-09 00:57:56.254357 | orchestrator |
2026-01-09 00:57:56.254367 | orchestrator | TASK [ceph-handler : Check for an osd
container] ******************************* 2026-01-09 00:57:56.254377 | orchestrator | Friday 09 January 2026 00:55:48 +0000 (0:00:00.362) 0:09:42.678 ******** 2026-01-09 00:57:56.254383 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254388 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254394 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254400 | orchestrator | 2026-01-09 00:57:56.254406 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-09 00:57:56.254427 | orchestrator | Friday 09 January 2026 00:55:49 +0000 (0:00:00.762) 0:09:43.441 ******** 2026-01-09 00:57:56.254434 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254440 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254445 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254451 | orchestrator | 2026-01-09 00:57:56.254457 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-09 00:57:56.254463 | orchestrator | Friday 09 January 2026 00:55:50 +0000 (0:00:01.166) 0:09:44.608 ******** 2026-01-09 00:57:56.254469 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254475 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254480 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254486 | orchestrator | 2026-01-09 00:57:56.254492 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-09 00:57:56.254498 | orchestrator | Friday 09 January 2026 00:55:51 +0000 (0:00:00.995) 0:09:45.603 ******** 2026-01-09 00:57:56.254504 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254510 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254516 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254522 | orchestrator | 2026-01-09 00:57:56.254528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-09 
00:57:56.254534 | orchestrator | Friday 09 January 2026 00:55:51 +0000 (0:00:00.552) 0:09:46.156 ******** 2026-01-09 00:57:56.254540 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254544 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254548 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254552 | orchestrator | 2026-01-09 00:57:56.254556 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-09 00:57:56.254560 | orchestrator | Friday 09 January 2026 00:55:52 +0000 (0:00:00.517) 0:09:46.673 ******** 2026-01-09 00:57:56.254563 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254567 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254571 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254575 | orchestrator | 2026-01-09 00:57:56.254579 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-09 00:57:56.254582 | orchestrator | Friday 09 January 2026 00:55:52 +0000 (0:00:00.626) 0:09:47.300 ******** 2026-01-09 00:57:56.254586 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254590 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254594 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254602 | orchestrator | 2026-01-09 00:57:56.254606 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-09 00:57:56.254610 | orchestrator | Friday 09 January 2026 00:55:53 +0000 (0:00:00.893) 0:09:48.193 ******** 2026-01-09 00:57:56.254614 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254617 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254621 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254625 | orchestrator | 2026-01-09 00:57:56.254629 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-09 00:57:56.254632 | orchestrator | Friday 
09 January 2026 00:55:54 +0000 (0:00:00.890) 0:09:49.084 ******** 2026-01-09 00:57:56.254636 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254640 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254644 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254648 | orchestrator | 2026-01-09 00:57:56.254652 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-09 00:57:56.254663 | orchestrator | Friday 09 January 2026 00:55:55 +0000 (0:00:00.527) 0:09:49.612 ******** 2026-01-09 00:57:56.254667 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254671 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254674 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254678 | orchestrator | 2026-01-09 00:57:56.254682 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-09 00:57:56.254686 | orchestrator | Friday 09 January 2026 00:55:56 +0000 (0:00:00.797) 0:09:50.409 ******** 2026-01-09 00:57:56.254690 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254693 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254697 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254701 | orchestrator | 2026-01-09 00:57:56.254705 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-09 00:57:56.254709 | orchestrator | Friday 09 January 2026 00:55:56 +0000 (0:00:00.370) 0:09:50.780 ******** 2026-01-09 00:57:56.254712 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254716 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254720 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254724 | orchestrator | 2026-01-09 00:57:56.254727 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-09 00:57:56.254731 | orchestrator | Friday 09 January 2026 00:55:56 +0000 
(0:00:00.503) 0:09:51.283 ******** 2026-01-09 00:57:56.254735 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254739 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254742 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254746 | orchestrator | 2026-01-09 00:57:56.254750 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-09 00:57:56.254754 | orchestrator | Friday 09 January 2026 00:55:57 +0000 (0:00:00.612) 0:09:51.896 ******** 2026-01-09 00:57:56.254758 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254761 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254765 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254769 | orchestrator | 2026-01-09 00:57:56.254773 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-09 00:57:56.254777 | orchestrator | Friday 09 January 2026 00:55:58 +0000 (0:00:01.231) 0:09:53.127 ******** 2026-01-09 00:57:56.254780 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254784 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254788 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254792 | orchestrator | 2026-01-09 00:57:56.254795 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-09 00:57:56.254799 | orchestrator | Friday 09 January 2026 00:55:59 +0000 (0:00:00.415) 0:09:53.543 ******** 2026-01-09 00:57:56.254803 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254807 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254810 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254814 | orchestrator | 2026-01-09 00:57:56.254818 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-09 00:57:56.254826 | orchestrator | Friday 09 January 2026 00:55:59 +0000 (0:00:00.334) 
0:09:53.878 ******** 2026-01-09 00:57:56.254830 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254834 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254837 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254841 | orchestrator | 2026-01-09 00:57:56.254845 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-09 00:57:56.254849 | orchestrator | Friday 09 January 2026 00:55:59 +0000 (0:00:00.351) 0:09:54.229 ******** 2026-01-09 00:57:56.254853 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.254857 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.254860 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.254864 | orchestrator | 2026-01-09 00:57:56.254868 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-09 00:57:56.254872 | orchestrator | Friday 09 January 2026 00:56:01 +0000 (0:00:01.312) 0:09:55.541 ******** 2026-01-09 00:57:56.254876 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.254879 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.254883 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-09 00:57:56.254887 | orchestrator | 2026-01-09 00:57:56.254891 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-09 00:57:56.254895 | orchestrator | Friday 09 January 2026 00:56:01 +0000 (0:00:00.422) 0:09:55.964 ******** 2026-01-09 00:57:56.254899 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 00:57:56.254902 | orchestrator | 2026-01-09 00:57:56.254906 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-09 00:57:56.254910 | orchestrator | Friday 09 January 2026 00:56:03 +0000 (0:00:02.181) 0:09:58.146 ******** 2026-01-09 00:57:56.254916 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-09 00:57:56.254922 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.254926 | orchestrator | 2026-01-09 00:57:56.254930 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-09 00:57:56.254934 | orchestrator | Friday 09 January 2026 00:56:04 +0000 (0:00:00.207) 0:09:58.354 ******** 2026-01-09 00:57:56.254939 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 00:57:56.254949 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 00:57:56.254953 | orchestrator | 2026-01-09 00:57:56.254960 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-09 00:57:56.254967 | orchestrator | Friday 09 January 2026 00:56:13 +0000 (0:00:09.083) 0:10:07.438 ******** 2026-01-09 00:57:56.254971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 00:57:56.254975 | orchestrator | 2026-01-09 00:57:56.254979 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-09 00:57:56.254982 | orchestrator | Friday 09 January 2026 00:56:16 +0000 (0:00:03.711) 0:10:11.149 ******** 2026-01-09 00:57:56.254986 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-09 00:57:56.254990 | orchestrator | 2026-01-09 00:57:56.254994 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-09 00:57:56.254998 | orchestrator | Friday 09 January 2026 00:56:17 +0000 (0:00:00.571) 0:10:11.721 ******** 2026-01-09 00:57:56.255002 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-09 00:57:56.255009 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-09 00:57:56.255013 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-09 00:57:56.255017 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-09 00:57:56.255020 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-09 00:57:56.255024 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-09 00:57:56.255028 | orchestrator | 2026-01-09 00:57:56.255032 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-09 00:57:56.255035 | orchestrator | Friday 09 January 2026 00:56:18 +0000 (0:00:01.087) 0:10:12.809 ******** 2026-01-09 00:57:56.255039 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.255043 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-09 00:57:56.255047 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 00:57:56.255051 | orchestrator | 2026-01-09 00:57:56.255054 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-09 00:57:56.255058 | orchestrator | Friday 09 January 2026 00:56:21 +0000 (0:00:02.538) 0:10:15.347 ******** 2026-01-09 00:57:56.255062 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-09 00:57:56.255066 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-09 00:57:56.255070 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255074 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-09 00:57:56.255077 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-09 00:57:56.255081 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255085 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-09 00:57:56.255089 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-09 00:57:56.255092 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255096 | orchestrator | 2026-01-09 00:57:56.255100 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-09 00:57:56.255104 | orchestrator | Friday 09 January 2026 00:56:22 +0000 (0:00:01.643) 0:10:16.991 ******** 2026-01-09 00:57:56.255108 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255111 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255115 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255119 | orchestrator | 2026-01-09 00:57:56.255123 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-09 00:57:56.255126 | orchestrator | Friday 09 January 2026 00:56:25 +0000 (0:00:02.674) 0:10:19.665 ******** 2026-01-09 00:57:56.255130 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255134 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255138 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255142 | orchestrator | 2026-01-09 00:57:56.255145 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-09 00:57:56.255149 | orchestrator | Friday 09 January 2026 00:56:25 +0000 (0:00:00.309) 0:10:19.975 ******** 2026-01-09 00:57:56.255153 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-09 00:57:56.255157 | orchestrator | 2026-01-09 00:57:56.255161 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-09 00:57:56.255165 | orchestrator | Friday 09 January 2026 00:56:26 +0000 (0:00:00.817) 0:10:20.793 ******** 2026-01-09 00:57:56.255168 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.255172 | orchestrator | 2026-01-09 00:57:56.255176 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-09 00:57:56.255180 | orchestrator | Friday 09 January 2026 00:56:27 +0000 (0:00:00.554) 0:10:21.347 ******** 2026-01-09 00:57:56.255183 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255191 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255194 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255198 | orchestrator | 2026-01-09 00:57:56.255202 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-09 00:57:56.255206 | orchestrator | Friday 09 January 2026 00:56:28 +0000 (0:00:01.284) 0:10:22.631 ******** 2026-01-09 00:57:56.255210 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255213 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255217 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255221 | orchestrator | 2026-01-09 00:57:56.255225 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-09 00:57:56.255229 | orchestrator | Friday 09 January 2026 00:56:29 +0000 (0:00:01.369) 0:10:24.001 ******** 2026-01-09 00:57:56.255232 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255236 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255240 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255244 | orchestrator | 2026-01-09 
00:57:56.255248 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-09 00:57:56.255254 | orchestrator | Friday 09 January 2026 00:56:31 +0000 (0:00:01.931) 0:10:25.933 ******** 2026-01-09 00:57:56.255262 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255266 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255270 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255273 | orchestrator | 2026-01-09 00:57:56.255277 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-09 00:57:56.255281 | orchestrator | Friday 09 January 2026 00:56:33 +0000 (0:00:02.033) 0:10:27.967 ******** 2026-01-09 00:57:56.255285 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255289 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255292 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255296 | orchestrator | 2026-01-09 00:57:56.255300 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-09 00:57:56.255304 | orchestrator | Friday 09 January 2026 00:56:35 +0000 (0:00:01.654) 0:10:29.621 ******** 2026-01-09 00:57:56.255307 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255311 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255315 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255319 | orchestrator | 2026-01-09 00:57:56.255322 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-09 00:57:56.255326 | orchestrator | Friday 09 January 2026 00:56:35 +0000 (0:00:00.690) 0:10:30.311 ******** 2026-01-09 00:57:56.255330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.255334 | orchestrator | 2026-01-09 00:57:56.255338 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-09 00:57:56.255341 | orchestrator | Friday 09 January 2026 00:56:36 +0000 (0:00:00.802) 0:10:31.114 ******** 2026-01-09 00:57:56.255345 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255349 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255353 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255357 | orchestrator | 2026-01-09 00:57:56.255361 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-09 00:57:56.255364 | orchestrator | Friday 09 January 2026 00:56:37 +0000 (0:00:00.332) 0:10:31.447 ******** 2026-01-09 00:57:56.255368 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.255372 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.255376 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.255379 | orchestrator | 2026-01-09 00:57:56.255383 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-09 00:57:56.255387 | orchestrator | Friday 09 January 2026 00:56:38 +0000 (0:00:01.265) 0:10:32.712 ******** 2026-01-09 00:57:56.255391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 00:57:56.255395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.255398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.255406 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255409 | orchestrator | 2026-01-09 00:57:56.255428 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-09 00:57:56.255435 | orchestrator | Friday 09 January 2026 00:56:39 +0000 (0:00:00.924) 0:10:33.637 ******** 2026-01-09 00:57:56.255441 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255447 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255453 | orchestrator | ok: [testbed-node-5] 2026-01-09 
00:57:56.255460 | orchestrator | 2026-01-09 00:57:56.255466 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-09 00:57:56.255472 | orchestrator | 2026-01-09 00:57:56.255479 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-09 00:57:56.255484 | orchestrator | Friday 09 January 2026 00:56:40 +0000 (0:00:00.886) 0:10:34.523 ******** 2026-01-09 00:57:56.255487 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.255491 | orchestrator | 2026-01-09 00:57:56.255495 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-09 00:57:56.255499 | orchestrator | Friday 09 January 2026 00:56:40 +0000 (0:00:00.552) 0:10:35.076 ******** 2026-01-09 00:57:56.255503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.255506 | orchestrator | 2026-01-09 00:57:56.255510 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-09 00:57:56.255514 | orchestrator | Friday 09 January 2026 00:56:41 +0000 (0:00:00.822) 0:10:35.899 ******** 2026-01-09 00:57:56.255518 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255521 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255525 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255529 | orchestrator | 2026-01-09 00:57:56.255533 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-09 00:57:56.255536 | orchestrator | Friday 09 January 2026 00:56:41 +0000 (0:00:00.352) 0:10:36.251 ******** 2026-01-09 00:57:56.255540 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255544 | orchestrator | ok: [testbed-node-5] 2026-01-09 
00:57:56.255548 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255552 | orchestrator | 2026-01-09 00:57:56.255555 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-09 00:57:56.255559 | orchestrator | Friday 09 January 2026 00:56:42 +0000 (0:00:00.907) 0:10:37.159 ******** 2026-01-09 00:57:56.255563 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255567 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255570 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255574 | orchestrator | 2026-01-09 00:57:56.255578 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-09 00:57:56.255582 | orchestrator | Friday 09 January 2026 00:56:44 +0000 (0:00:01.577) 0:10:38.737 ******** 2026-01-09 00:57:56.255585 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255589 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255593 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255597 | orchestrator | 2026-01-09 00:57:56.255600 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-09 00:57:56.255604 | orchestrator | Friday 09 January 2026 00:56:45 +0000 (0:00:00.778) 0:10:39.516 ******** 2026-01-09 00:57:56.255608 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255615 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255622 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255626 | orchestrator | 2026-01-09 00:57:56.255630 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-09 00:57:56.255633 | orchestrator | Friday 09 January 2026 00:56:45 +0000 (0:00:00.400) 0:10:39.917 ******** 2026-01-09 00:57:56.255637 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255641 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255648 | orchestrator | skipping: 
[testbed-node-5] 2026-01-09 00:57:56.255652 | orchestrator | 2026-01-09 00:57:56.255656 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-09 00:57:56.255660 | orchestrator | Friday 09 January 2026 00:56:45 +0000 (0:00:00.343) 0:10:40.261 ******** 2026-01-09 00:57:56.255664 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255667 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255671 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255675 | orchestrator | 2026-01-09 00:57:56.255679 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-09 00:57:56.255683 | orchestrator | Friday 09 January 2026 00:56:46 +0000 (0:00:00.682) 0:10:40.943 ******** 2026-01-09 00:57:56.255687 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255690 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255694 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255698 | orchestrator | 2026-01-09 00:57:56.255702 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-09 00:57:56.255705 | orchestrator | Friday 09 January 2026 00:56:47 +0000 (0:00:00.760) 0:10:41.704 ******** 2026-01-09 00:57:56.255709 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255713 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255717 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255721 | orchestrator | 2026-01-09 00:57:56.255724 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-09 00:57:56.255728 | orchestrator | Friday 09 January 2026 00:56:48 +0000 (0:00:00.743) 0:10:42.447 ******** 2026-01-09 00:57:56.255732 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255736 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255740 | orchestrator | skipping: [testbed-node-5] 2026-01-09 
00:57:56.255743 | orchestrator | 2026-01-09 00:57:56.255747 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-09 00:57:56.255751 | orchestrator | Friday 09 January 2026 00:56:48 +0000 (0:00:00.419) 0:10:42.867 ******** 2026-01-09 00:57:56.255755 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255758 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255762 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255766 | orchestrator | 2026-01-09 00:57:56.255770 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-09 00:57:56.255774 | orchestrator | Friday 09 January 2026 00:56:49 +0000 (0:00:00.627) 0:10:43.495 ******** 2026-01-09 00:57:56.255777 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255781 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255785 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255789 | orchestrator | 2026-01-09 00:57:56.255792 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-09 00:57:56.255796 | orchestrator | Friday 09 January 2026 00:56:49 +0000 (0:00:00.370) 0:10:43.865 ******** 2026-01-09 00:57:56.255800 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255804 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255807 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255811 | orchestrator | 2026-01-09 00:57:56.255815 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-09 00:57:56.255819 | orchestrator | Friday 09 January 2026 00:56:49 +0000 (0:00:00.393) 0:10:44.258 ******** 2026-01-09 00:57:56.255823 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255826 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255830 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255834 | orchestrator | 2026-01-09 
00:57:56.255838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-09 00:57:56.255841 | orchestrator | Friday 09 January 2026 00:56:50 +0000 (0:00:00.365) 0:10:44.624 ******** 2026-01-09 00:57:56.255845 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255849 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255853 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255857 | orchestrator | 2026-01-09 00:57:56.255863 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-09 00:57:56.255867 | orchestrator | Friday 09 January 2026 00:56:50 +0000 (0:00:00.405) 0:10:45.030 ******** 2026-01-09 00:57:56.255871 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255875 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255878 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255882 | orchestrator | 2026-01-09 00:57:56.255886 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-09 00:57:56.255890 | orchestrator | Friday 09 January 2026 00:56:51 +0000 (0:00:00.687) 0:10:45.718 ******** 2026-01-09 00:57:56.255893 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.255897 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.255901 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.255905 | orchestrator | 2026-01-09 00:57:56.255909 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-09 00:57:56.255912 | orchestrator | Friday 09 January 2026 00:56:51 +0000 (0:00:00.329) 0:10:46.048 ******** 2026-01-09 00:57:56.255916 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255920 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255924 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255927 | orchestrator | 2026-01-09 00:57:56.255931 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-09 00:57:56.255935 | orchestrator | Friday 09 January 2026 00:56:52 +0000 (0:00:00.406) 0:10:46.454 ******** 2026-01-09 00:57:56.255939 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.255943 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.255946 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.255950 | orchestrator | 2026-01-09 00:57:56.255954 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-09 00:57:56.255958 | orchestrator | Friday 09 January 2026 00:56:53 +0000 (0:00:00.914) 0:10:47.368 ******** 2026-01-09 00:57:56.255964 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.255968 | orchestrator | 2026-01-09 00:57:56.255974 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-09 00:57:56.255978 | orchestrator | Friday 09 January 2026 00:56:53 +0000 (0:00:00.638) 0:10:48.006 ******** 2026-01-09 00:57:56.255982 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.255986 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-09 00:57:56.255991 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 00:57:56.255997 | orchestrator | 2026-01-09 00:57:56.256003 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-09 00:57:56.256009 | orchestrator | Friday 09 January 2026 00:56:55 +0000 (0:00:02.260) 0:10:50.267 ******** 2026-01-09 00:57:56.256014 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-09 00:57:56.256020 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-09 00:57:56.256026 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.256032 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-09 00:57:56.256037 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-09 00:57:56.256044 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.256049 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-09 00:57:56.256055 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-09 00:57:56.256060 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.256065 | orchestrator | 2026-01-09 00:57:56.256070 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-09 00:57:56.256076 | orchestrator | Friday 09 January 2026 00:56:57 +0000 (0:00:01.462) 0:10:51.729 ******** 2026-01-09 00:57:56.256082 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256088 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.256094 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.256100 | orchestrator | 2026-01-09 00:57:56.256113 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-09 00:57:56.256119 | orchestrator | Friday 09 January 2026 00:56:57 +0000 (0:00:00.415) 0:10:52.145 ******** 2026-01-09 00:57:56.256126 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.256130 | orchestrator | 2026-01-09 00:57:56.256134 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-09 00:57:56.256138 | orchestrator | Friday 09 January 2026 00:56:58 +0000 (0:00:00.545) 0:10:52.690 ******** 2026-01-09 00:57:56.256142 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256147 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256151 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256155 | orchestrator | 2026-01-09 00:57:56.256158 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-09 00:57:56.256162 | orchestrator | Friday 09 January 2026 00:56:59 +0000 (0:00:01.321) 0:10:54.012 ******** 2026-01-09 00:57:56.256166 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256170 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-09 00:57:56.256174 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256178 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-09 00:57:56.256181 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256185 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-09 00:57:56.256189 | orchestrator | 2026-01-09 00:57:56.256193 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-09 00:57:56.256197 | orchestrator | Friday 09 January 2026 00:57:04 +0000 (0:00:04.954) 0:10:58.966 ******** 2026-01-09 00:57:56.256201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256204 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 00:57:56.256208 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256212 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 00:57:56.256216 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 00:57:56.256220 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 00:57:56.256223 | orchestrator | 2026-01-09 00:57:56.256227 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-09 00:57:56.256231 | orchestrator | Friday 09 January 2026 00:57:07 +0000 (0:00:02.841) 0:11:01.808 ******** 2026-01-09 00:57:56.256235 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-09 00:57:56.256239 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.256243 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-09 00:57:56.256246 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.256250 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-09 00:57:56.256254 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.256258 | orchestrator | 2026-01-09 00:57:56.256269 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-09 00:57:56.256273 | orchestrator | Friday 09 January 2026 00:57:08 +0000 (0:00:01.265) 0:11:03.074 ******** 2026-01-09 00:57:56.256281 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-09 00:57:56.256285 | orchestrator | 2026-01-09 00:57:56.256288 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-09 00:57:56.256292 | orchestrator | Friday 09 January 2026 00:57:08 +0000 (0:00:00.227) 0:11:03.301 ******** 2026-01-09 00:57:56.256296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-09 00:57:56.256301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256316 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256320 | orchestrator | 2026-01-09 00:57:56.256324 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-09 00:57:56.256328 | orchestrator | Friday 09 January 2026 00:57:10 +0000 (0:00:01.121) 0:11:04.423 ******** 2026-01-09 00:57:56.256331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-09 00:57:56.256351 | orchestrator | skipping: [testbed-node-3] 2026-01-09 
00:57:56.256354 | orchestrator | 2026-01-09 00:57:56.256358 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-09 00:57:56.256362 | orchestrator | Friday 09 January 2026 00:57:10 +0000 (0:00:00.631) 0:11:05.054 ******** 2026-01-09 00:57:56.256366 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-09 00:57:56.256370 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-09 00:57:56.256374 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-09 00:57:56.256378 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-09 00:57:56.256381 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-09 00:57:56.256385 | orchestrator | 2026-01-09 00:57:56.256389 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-09 00:57:56.256393 | orchestrator | Friday 09 January 2026 00:57:41 +0000 (0:00:30.484) 0:11:35.539 ******** 2026-01-09 00:57:56.256397 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256401 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.256404 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.256444 | orchestrator | 2026-01-09 00:57:56.256449 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-09 00:57:56.256453 | orchestrator | 
Friday 09 January 2026 00:57:41 +0000 (0:00:00.334) 0:11:35.873 ******** 2026-01-09 00:57:56.256457 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256461 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.256465 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.256468 | orchestrator | 2026-01-09 00:57:56.256472 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-09 00:57:56.256476 | orchestrator | Friday 09 January 2026 00:57:41 +0000 (0:00:00.319) 0:11:36.193 ******** 2026-01-09 00:57:56.256480 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.256484 | orchestrator | 2026-01-09 00:57:56.256487 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-09 00:57:56.256491 | orchestrator | Friday 09 January 2026 00:57:42 +0000 (0:00:00.821) 0:11:37.014 ******** 2026-01-09 00:57:56.256498 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.256502 | orchestrator | 2026-01-09 00:57:56.256506 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-09 00:57:56.256510 | orchestrator | Friday 09 January 2026 00:57:43 +0000 (0:00:00.566) 0:11:37.580 ******** 2026-01-09 00:57:56.256513 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.256517 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.256521 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.256525 | orchestrator | 2026-01-09 00:57:56.256529 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-09 00:57:56.256532 | orchestrator | Friday 09 January 2026 00:57:44 +0000 (0:00:01.191) 0:11:38.771 ******** 2026-01-09 00:57:56.256536 | orchestrator | changed: 
[testbed-node-3] 2026-01-09 00:57:56.256540 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.256544 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.256547 | orchestrator | 2026-01-09 00:57:56.256551 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-09 00:57:56.256555 | orchestrator | Friday 09 January 2026 00:57:46 +0000 (0:00:01.591) 0:11:40.362 ******** 2026-01-09 00:57:56.256559 | orchestrator | changed: [testbed-node-3] 2026-01-09 00:57:56.256562 | orchestrator | changed: [testbed-node-5] 2026-01-09 00:57:56.256566 | orchestrator | changed: [testbed-node-4] 2026-01-09 00:57:56.256570 | orchestrator | 2026-01-09 00:57:56.256574 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-09 00:57:56.256577 | orchestrator | Friday 09 January 2026 00:57:47 +0000 (0:00:01.685) 0:11:42.048 ******** 2026-01-09 00:57:56.256635 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256654 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256658 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-09 00:57:56.256662 | orchestrator | 2026-01-09 00:57:56.256666 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-09 00:57:56.256670 | orchestrator | Friday 09 January 2026 00:57:50 +0000 (0:00:02.391) 0:11:44.439 ******** 2026-01-09 00:57:56.256674 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256678 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.256681 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.256685 | orchestrator 
| 2026-01-09 00:57:56.256689 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-09 00:57:56.256693 | orchestrator | Friday 09 January 2026 00:57:50 +0000 (0:00:00.335) 0:11:44.774 ******** 2026-01-09 00:57:56.256702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 00:57:56.256706 | orchestrator | 2026-01-09 00:57:56.256710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-09 00:57:56.256713 | orchestrator | Friday 09 January 2026 00:57:50 +0000 (0:00:00.522) 0:11:45.297 ******** 2026-01-09 00:57:56.256717 | orchestrator | ok: [testbed-node-3] 2026-01-09 00:57:56.256721 | orchestrator | ok: [testbed-node-4] 2026-01-09 00:57:56.256725 | orchestrator | ok: [testbed-node-5] 2026-01-09 00:57:56.256728 | orchestrator | 2026-01-09 00:57:56.256732 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-09 00:57:56.256736 | orchestrator | Friday 09 January 2026 00:57:51 +0000 (0:00:00.599) 0:11:45.897 ******** 2026-01-09 00:57:56.256740 | orchestrator | skipping: [testbed-node-3] 2026-01-09 00:57:56.256743 | orchestrator | skipping: [testbed-node-4] 2026-01-09 00:57:56.256747 | orchestrator | skipping: [testbed-node-5] 2026-01-09 00:57:56.256751 | orchestrator | 2026-01-09 00:57:56.256755 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-09 00:57:56.256758 | orchestrator | Friday 09 January 2026 00:57:51 +0000 (0:00:00.321) 0:11:46.218 ******** 2026-01-09 00:57:56.256762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 00:57:56.256766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 00:57:56.256770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 00:57:56.256773 | orchestrator 
| skipping: [testbed-node-3]
2026-01-09 00:57:56.256777 | orchestrator |
2026-01-09 00:57:56.256781 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-09 00:57:56.256785 | orchestrator | Friday 09 January 2026 00:57:52 +0000 (0:00:00.616) 0:11:46.835 ********
2026-01-09 00:57:56.256788 | orchestrator | ok: [testbed-node-3]
2026-01-09 00:57:56.256792 | orchestrator | ok: [testbed-node-4]
2026-01-09 00:57:56.256796 | orchestrator | ok: [testbed-node-5]
2026-01-09 00:57:56.256800 | orchestrator |
2026-01-09 00:57:56.256803 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:57:56.256807 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-01-09 00:57:56.256811 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-01-09 00:57:56.256815 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-01-09 00:57:56.256819 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-01-09 00:57:56.256823 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-01-09 00:57:56.256833 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-01-09 00:57:56.256837 | orchestrator |
2026-01-09 00:57:56.256841 | orchestrator |
2026-01-09 00:57:56.256845 | orchestrator |
2026-01-09 00:57:56.256849 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:57:56.256852 | orchestrator | Friday 09 January 2026 00:57:52 +0000 (0:00:00.254) 0:11:47.090 ********
2026-01-09 00:57:56.256856 | orchestrator | ===============================================================================
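The PLAY RECAP above reports failed=0 and unreachable=0 on all six nodes, which is the success criterion for this ceph-ansible run. When post-processing console logs like this one, the recap counters can be extracted mechanically; the sketch below is a hypothetical helper (not part of the testbed job or its tooling) that parses one recap line:

```python
import re

# Hypothetical log-checker helper; not part of the testbed job itself.
# An Ansible PLAY RECAP line looks like:
#   testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 ...
# Every counter is a "key=value" pair, so one regex recovers them all.
RECAP_RE = re.compile(r"(\w+)=(\d+)")

def parse_recap(line: str) -> dict:
    """Return the counters of one PLAY RECAP line as a dict of ints."""
    return {key: int(value) for key, value in RECAP_RE.findall(line)}

recap = parse_recap(
    "testbed-node-0 : ok=134  changed=34  unreachable=0 "
    "failed=0 skipped=125  rescued=0 ignored=0"
)
# A run is considered healthy when nothing failed and every host was reachable.
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

The TASKS RECAP durations that follow use a different layout (task name, dashes, then a trailing `NN.NNs` field), so they would need a separate pattern.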
2026-01-09 00:57:56.256860 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.87s
2026-01-09 00:57:56.256864 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.30s
2026-01-09 00:57:56.256867 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.58s
2026-01-09 00:57:56.256874 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.48s
2026-01-09 00:57:56.256878 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.92s
2026-01-09 00:57:56.256882 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.68s
2026-01-09 00:57:56.256886 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.30s
2026-01-09 00:57:56.256889 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.60s
2026-01-09 00:57:56.256893 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.81s
2026-01-09 00:57:56.256897 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.08s
2026-01-09 00:57:56.256901 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.53s
2026-01-09 00:57:56.256904 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.73s
2026-01-09 00:57:56.256908 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.47s
2026-01-09 00:57:56.256915 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.18s
2026-01-09 00:57:56.256919 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.95s
2026-01-09 00:57:56.256923 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.47s
2026-01-09 00:57:56.256927 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.21s
2026-01-09 00:57:56.256930 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.06s
2026-01-09 00:57:56.256934 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.77s
2026-01-09 00:57:56.256938 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.71s
2026-01-09 00:57:56.256942 | orchestrator | 2026-01-09 00:57:56 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED
2026-01-09 00:57:56.256945 | orchestrator | 2026-01-09 00:57:56 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:57:59.300661 | orchestrator | 2026-01-09 00:57:59 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:57:59.302276 | orchestrator | 2026-01-09 00:57:59 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED
2026-01-09 00:57:59.304762 | orchestrator | 2026-01-09 00:57:59 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED
2026-01-09 00:57:59.304812 | orchestrator | 2026-01-09 00:57:59 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:58:02.358817 | orchestrator | 2026-01-09 00:58:02 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:58:02.362075 | orchestrator | 2026-01-09 00:58:02 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED
2026-01-09 00:58:02.364050 | orchestrator | 2026-01-09 00:58:02 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED
2026-01-09 00:58:02.364584 | orchestrator | 2026-01-09 00:58:02 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:58:05.417815 | orchestrator | 2026-01-09 00:58:05 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:58:05.419456 | orchestrator | 2026-01-09 00:58:05 | INFO  | Task
e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED
2026-01-09 00:58:05.422257 | orchestrator | 2026-01-09 00:58:05 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED
2026-01-09 00:58:05.422314 | orchestrator | 2026-01-09 00:58:05 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:58:08.473568 | orchestrator | 2026-01-09 00:58:08 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:58:08.475656 | orchestrator | 2026-01-09 00:58:08 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED
2026-01-09 00:58:08.477657 | orchestrator | 2026-01-09 00:58:08 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state STARTED
2026-01-09 00:58:08.477719 | orchestrator | 2026-01-09 00:58:08 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:58:11.527177 | orchestrator | 2026-01-09 00:58:11 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:58:11.528387 | orchestrator | 2026-01-09 00:58:11 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED
2026-01-09 00:58:11.530613 | orchestrator | 2026-01-09 00:58:11 | INFO  | Task 2f38303b-6d9b-4ea5-8f21-639171da3345 is in state SUCCESS
2026-01-09 00:58:11.531976 | orchestrator |
2026-01-09 00:58:11.532010 | orchestrator |
2026-01-09 00:58:11.532017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 00:58:11.532023 | orchestrator |
2026-01-09 00:58:11.532028 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 00:58:11.532033 | orchestrator | Friday 09 January 2026 00:55:24 +0000 (0:00:00.266) 0:00:00.266 ********
2026-01-09 00:58:11.532038 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:11.532045 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:11.532050 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:11.532056 | orchestrator |
2026-01-09 00:58:11.532061 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 00:58:11.532066 | orchestrator | Friday 09 January 2026 00:55:24 +0000 (0:00:00.303) 0:00:00.570 ********
2026-01-09 00:58:11.532071 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-09 00:58:11.532077 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-09 00:58:11.532081 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-09 00:58:11.532085 | orchestrator |
2026-01-09 00:58:11.532089 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-09 00:58:11.532093 | orchestrator |
2026-01-09 00:58:11.532097 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-09 00:58:11.532101 | orchestrator | Friday 09 January 2026 00:55:25 +0000 (0:00:00.451) 0:00:01.021 ********
2026-01-09 00:58:11.532106 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:58:11.532110 | orchestrator |
2026-01-09 00:58:11.532114 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-09 00:58:11.532118 | orchestrator | Friday 09 January 2026 00:55:25 +0000 (0:00:00.540) 0:00:01.562 ********
2026-01-09 00:58:11.532122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:58:11.532126 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:58:11.532130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-09 00:58:11.532134 | orchestrator |
2026-01-09 00:58:11.532138 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-09 00:58:11.532142 | orchestrator | Friday 09
January 2026 00:55:26 +0000 (0:00:00.778) 0:00:02.341 ******** 2026-01-09 00:58:11.532149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532227 | orchestrator | 
2026-01-09 00:58:11.532231 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-09 00:58:11.532234 | orchestrator | Friday 09 January 2026 00:55:28 +0000 (0:00:01.721) 0:00:04.063 ******** 2026-01-09 00:58:11.532238 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:58:11.532242 | orchestrator | 2026-01-09 00:58:11.532246 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-09 00:58:11.532250 | orchestrator | Friday 09 January 2026 00:55:28 +0000 (0:00:00.551) 0:00:04.615 ******** 2026-01-09 00:58:11.532261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532296 | orchestrator | 2026-01-09 00:58:11.532300 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-09 00:58:11.532304 | orchestrator | Friday 09 January 2026 00:55:31 +0000 (0:00:02.940) 0:00:07.555 ******** 2026-01-09 00:58:11.532308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532320 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:58:11.532327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532338 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:11.532342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532354 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:11.532358 | orchestrator | 2026-01-09 00:58:11.532361 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-09 00:58:11.532365 | orchestrator | Friday 09 January 2026 00:55:32 +0000 (0:00:01.243) 0:00:08.799 ******** 2026-01-09 00:58:11.532372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532383 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:58:11.532387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-09 00:58:11.532399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-09 00:58:11.532415 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:11.532419 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:11.532423 | orchestrator | 2026-01-09 00:58:11.532427 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-09 00:58:11.532430 | orchestrator | Friday 09 January 2026 00:55:33 +0000 (0:00:01.045) 0:00:09.845 ******** 2026-01-09 00:58:11.532434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532511 | orchestrator | 2026-01-09 00:58:11.532515 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-09 00:58:11.532518 | orchestrator | Friday 09 January 2026 00:55:36 +0000 (0:00:02.597) 0:00:12.442 ******** 2026-01-09 00:58:11.532522 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:11.532526 | orchestrator | changed: [testbed-node-2] 
2026-01-09 00:58:11.532530 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:58:11.532534 | orchestrator | 2026-01-09 00:58:11.532537 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-09 00:58:11.532541 | orchestrator | Friday 09 January 2026 00:55:39 +0000 (0:00:03.304) 0:00:15.747 ******** 2026-01-09 00:58:11.532545 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:11.532549 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:58:11.532552 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:58:11.532556 | orchestrator | 2026-01-09 00:58:11.532560 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-09 00:58:11.532563 | orchestrator | Friday 09 January 2026 00:55:42 +0000 (0:00:02.178) 0:00:17.925 ******** 2026-01-09 00:58:11.532570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-09 00:58:11.532590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-09 00:58:11.532611 | orchestrator | 2026-01-09 00:58:11.532615 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-09 00:58:11.532619 | orchestrator | Friday 09 January 2026 00:55:44 +0000 (0:00:02.436) 0:00:20.361 ******** 2026-01-09 00:58:11.532623 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:58:11.532627 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:11.532631 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:11.532634 | orchestrator | 2026-01-09 00:58:11.532639 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-09 00:58:11.532646 | orchestrator | Friday 09 January 2026 00:55:44 +0000 (0:00:00.323) 0:00:20.685 ******** 2026-01-09 00:58:11.532652 | orchestrator | 2026-01-09 00:58:11.532657 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-09 00:58:11.532663 | orchestrator | Friday 09 January 2026 00:55:44 +0000 (0:00:00.080) 0:00:20.765 ******** 2026-01-09 00:58:11.532670 | orchestrator | 2026-01-09 00:58:11.532675 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-09 00:58:11.532681 | 
orchestrator | Friday 09 January 2026 00:55:44 +0000 (0:00:00.064) 0:00:20.830 ******** 2026-01-09 00:58:11.532687 | orchestrator | 2026-01-09 00:58:11.532693 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-09 00:58:11.532698 | orchestrator | Friday 09 January 2026 00:55:44 +0000 (0:00:00.070) 0:00:20.901 ******** 2026-01-09 00:58:11.532704 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:58:11.532710 | orchestrator | 2026-01-09 00:58:11.532716 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-09 00:58:11.532722 | orchestrator | Friday 09 January 2026 00:55:45 +0000 (0:00:00.219) 0:00:21.120 ******** 2026-01-09 00:58:11.532728 | orchestrator | skipping: [testbed-node-0] 2026-01-09 00:58:11.532734 | orchestrator | 2026-01-09 00:58:11.532741 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-09 00:58:11.532747 | orchestrator | Friday 09 January 2026 00:55:45 +0000 (0:00:00.698) 0:00:21.819 ******** 2026-01-09 00:58:11.532753 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:11.532759 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:58:11.532765 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:58:11.532772 | orchestrator | 2026-01-09 00:58:11.532778 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-09 00:58:11.532784 | orchestrator | Friday 09 January 2026 00:56:41 +0000 (0:00:55.798) 0:01:17.617 ******** 2026-01-09 00:58:11.532790 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:11.532796 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:58:11.532802 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:58:11.532808 | orchestrator | 2026-01-09 00:58:11.532815 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-09 00:58:11.532821 | 
orchestrator | Friday 09 January 2026 00:57:56 +0000 (0:01:14.502) 0:02:32.120 ******** 2026-01-09 00:58:11.532827 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:58:11.532833 | orchestrator | 2026-01-09 00:58:11.532839 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-09 00:58:11.532845 | orchestrator | Friday 09 January 2026 00:57:56 +0000 (0:00:00.719) 0:02:32.839 ******** 2026-01-09 00:58:11.532851 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:58:11.532857 | orchestrator | 2026-01-09 00:58:11.532864 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-01-09 00:58:11.532870 | orchestrator | Friday 09 January 2026 00:57:59 +0000 (0:00:02.694) 0:02:35.534 ******** 2026-01-09 00:58:11.532876 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:58:11.532882 | orchestrator | 2026-01-09 00:58:11.532893 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-09 00:58:11.532900 | orchestrator | Friday 09 January 2026 00:58:02 +0000 (0:00:02.573) 0:02:38.108 ******** 2026-01-09 00:58:11.532906 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:58:11.532912 | orchestrator | 2026-01-09 00:58:11.532918 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-09 00:58:11.532924 | orchestrator | Friday 09 January 2026 00:58:04 +0000 (0:00:02.609) 0:02:40.717 ******** 2026-01-09 00:58:11.532930 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:11.532936 | orchestrator | 2026-01-09 00:58:11.532942 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-09 00:58:11.532948 | orchestrator | Friday 09 January 2026 00:58:07 +0000 (0:00:03.036) 0:02:43.754 ******** 2026-01-09 00:58:11.532955 | orchestrator | changed: 
[testbed-node-0] 2026-01-09 00:58:11.532961 | orchestrator | 2026-01-09 00:58:11.532967 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 00:58:11.532978 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 00:58:11.532985 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 00:58:11.532995 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 00:58:11.533001 | orchestrator | 2026-01-09 00:58:11.533007 | orchestrator | 2026-01-09 00:58:11.533014 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 00:58:11.533020 | orchestrator | Friday 09 January 2026 00:58:10 +0000 (0:00:02.565) 0:02:46.319 ******** 2026-01-09 00:58:11.533026 | orchestrator | =============================================================================== 2026-01-09 00:58:11.533032 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 74.50s 2026-01-09 00:58:11.533039 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.80s 2026-01-09 00:58:11.533045 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.30s 2026-01-09 00:58:11.533051 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.04s 2026-01-09 00:58:11.533057 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.94s 2026-01-09 00:58:11.533063 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.69s 2026-01-09 00:58:11.533069 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.61s 2026-01-09 00:58:11.533075 | orchestrator | opensearch : Copying over config.json files for 
services ---------------- 2.60s 2026-01-09 00:58:11.533081 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.57s 2026-01-09 00:58:11.533088 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-01-09 00:58:11.533094 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.44s 2026-01-09 00:58:11.533100 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.18s 2026-01-09 00:58:11.533106 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s 2026-01-09 00:58:11.533113 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.24s 2026-01-09 00:58:11.533119 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.05s 2026-01-09 00:58:11.533125 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.78s 2026-01-09 00:58:11.533131 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2026-01-09 00:58:11.533137 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.70s 2026-01-09 00:58:11.533143 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-01-09 00:58:11.533157 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-01-09 00:58:11.533163 | orchestrator | 2026-01-09 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:58:14.580992 | orchestrator | 2026-01-09 00:58:14 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:14.581105 | orchestrator | 2026-01-09 00:58:14 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:58:14.581160 | orchestrator | 2026-01-09 00:58:14 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 00:58:17.634426 | orchestrator | 2026-01-09 00:58:17 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:17.635895 | orchestrator | 2026-01-09 00:58:17 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:58:17.636445 | orchestrator | 2026-01-09 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:58:20.686762 | orchestrator | 2026-01-09 00:58:20 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:20.688383 | orchestrator | 2026-01-09 00:58:20 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:58:20.688469 | orchestrator | 2026-01-09 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:58:23.737480 | orchestrator | 2026-01-09 00:58:23 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:23.739883 | orchestrator | 2026-01-09 00:58:23 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:58:23.739932 | orchestrator | 2026-01-09 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:58:26.784694 | orchestrator | 2026-01-09 00:58:26 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:26.786533 | orchestrator | 2026-01-09 00:58:26 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state STARTED 2026-01-09 00:58:26.786596 | orchestrator | 2026-01-09 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-01-09 00:58:29.829019 | orchestrator | 2026-01-09 00:58:29 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED 2026-01-09 00:58:29.834138 | orchestrator | 2026-01-09 00:58:29 | INFO  | Task e8a807f9-9c1a-43fd-9f0b-4169f980e7c6 is in state SUCCESS 2026-01-09 00:58:29.834920 | orchestrator | 2026-01-09 00:58:29.835171 | orchestrator | 2026-01-09 00:58:29.835187 | orchestrator | PLAY [Set 
kolla_action_mariadb] ************************************************ 2026-01-09 00:58:29.835197 | orchestrator | 2026-01-09 00:58:29.835203 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-09 00:58:29.835210 | orchestrator | Friday 09 January 2026 00:55:24 +0000 (0:00:00.097) 0:00:00.097 ******** 2026-01-09 00:58:29.835217 | orchestrator | ok: [localhost] => { 2026-01-09 00:58:29.835226 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-09 00:58:29.835233 | orchestrator | } 2026-01-09 00:58:29.835240 | orchestrator | 2026-01-09 00:58:29.835247 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-09 00:58:29.835254 | orchestrator | Friday 09 January 2026 00:55:24 +0000 (0:00:00.059) 0:00:00.157 ******** 2026-01-09 00:58:29.835261 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-09 00:58:29.835269 | orchestrator | ...ignoring 2026-01-09 00:58:29.835276 | orchestrator | 2026-01-09 00:58:29.835282 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-09 00:58:29.835289 | orchestrator | Friday 09 January 2026 00:55:27 +0000 (0:00:02.888) 0:00:03.048 ******** 2026-01-09 00:58:29.835295 | orchestrator | skipping: [localhost] 2026-01-09 00:58:29.835327 | orchestrator | 2026-01-09 00:58:29.835334 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-09 00:58:29.835341 | orchestrator | Friday 09 January 2026 00:55:27 +0000 (0:00:00.053) 0:00:03.102 ******** 2026-01-09 00:58:29.835348 | orchestrator | ok: [localhost] 2026-01-09 00:58:29.835355 | orchestrator | 2026-01-09 00:58:29.835361 | orchestrator | PLAY [Group hosts based on configuration] 
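The "Check MariaDB service" step above probes the database VIP and, per the preceding notice, is expected to fail on a fresh deployment. A minimal sketch of that logic, assuming the task behaves like Ansible's `wait_for` with a `MariaDB` search string (the helper names here are hypothetical, not the playbook's actual module; host and port are taken from the log):

```python
import socket

def mariadb_running(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Return True if something on host:port greets us with a MariaDB banner."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # MySQL/MariaDB servers send a greeting packet first, so a plain
            # read right after connecting is enough to see the banner.
            banner = sock.recv(128)
    except OSError:
        # Unreachable, refused, or timed out -> treat as "not yet deployed",
        # which mirrors the ignored failure in the log above.
        return False
    return b"MariaDB" in banner

def pick_kolla_action(running: bool, default_action: str = "deploy") -> str:
    """Upgrade an already-running cluster, otherwise fall back to deploy."""
    return "upgrade" if running else default_action
```

In this run the probe times out (`Timeout when waiting for search string MariaDB in 192.168.16.9:3306`), so the upgrade branch is skipped and `kolla_action_mariadb` keeps the deploy action.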
************************************** 2026-01-09 00:58:29.835368 | orchestrator | 2026-01-09 00:58:29.835375 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 00:58:29.835381 | orchestrator | Friday 09 January 2026 00:55:27 +0000 (0:00:00.154) 0:00:03.256 ******** 2026-01-09 00:58:29.835388 | orchestrator | ok: [testbed-node-0] 2026-01-09 00:58:29.835395 | orchestrator | ok: [testbed-node-1] 2026-01-09 00:58:29.835402 | orchestrator | ok: [testbed-node-2] 2026-01-09 00:58:29.835408 | orchestrator | 2026-01-09 00:58:29.835415 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 00:58:29.835422 | orchestrator | Friday 09 January 2026 00:55:27 +0000 (0:00:00.333) 0:00:03.590 ******** 2026-01-09 00:58:29.835429 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-09 00:58:29.835437 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-09 00:58:29.835443 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-09 00:58:29.835450 | orchestrator | 2026-01-09 00:58:29.835457 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-09 00:58:29.835463 | orchestrator | 2026-01-09 00:58:29.835469 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-09 00:58:29.835475 | orchestrator | Friday 09 January 2026 00:55:28 +0000 (0:00:00.631) 0:00:04.221 ******** 2026-01-09 00:58:29.835482 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-09 00:58:29.835489 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-09 00:58:29.835496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-09 00:58:29.835502 | orchestrator | 2026-01-09 00:58:29.835509 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-09 
00:58:29.835572 | orchestrator | Friday 09 January 2026 00:55:28 +0000 (0:00:00.396) 0:00:04.618 ******** 2026-01-09 00:58:29.835579 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 00:58:29.835588 | orchestrator | 2026-01-09 00:58:29.835594 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-09 00:58:29.835600 | orchestrator | Friday 09 January 2026 00:55:29 +0000 (0:00:00.518) 0:00:05.136 ******** 2026-01-09 00:58:29.835640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835687 | orchestrator | 2026-01-09 00:58:29.835701 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-09 00:58:29.835709 | orchestrator | Friday 09 January 2026 00:55:32 +0000 (0:00:03.282) 0:00:08.419 ******** 2026-01-09 00:58:29.835716 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:29.835724 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:29.835731 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:29.835738 | orchestrator | 2026-01-09 00:58:29.835745 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-09 00:58:29.835752 | orchestrator | Friday 09 January 2026 00:55:33 +0000 (0:00:00.952) 0:00:09.371 ******** 2026-01-09 00:58:29.835759 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:29.835765 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:29.835772 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:29.835779 | orchestrator | 2026-01-09 00:58:29.835785 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-09 00:58:29.835792 | orchestrator | Friday 09 January 2026 00:55:34 +0000 (0:00:01.548) 0:00:10.920 ******** 2026-01-09 00:58:29.835800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-09 00:58:29.835837 | orchestrator | 2026-01-09 00:58:29.835844 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-09 00:58:29.835851 | orchestrator | Friday 09 January 2026 00:55:39 +0000 (0:00:04.551) 0:00:15.471 ******** 2026-01-09 00:58:29.835858 | orchestrator | skipping: [testbed-node-1] 2026-01-09 00:58:29.835864 | orchestrator | skipping: [testbed-node-2] 2026-01-09 00:58:29.835871 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:29.835878 | orchestrator | 2026-01-09 00:58:29.835884 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-09 00:58:29.835891 | orchestrator | Friday 09 January 2026 00:55:40 +0000 (0:00:01.248) 0:00:16.719 ******** 2026-01-09 00:58:29.835898 | orchestrator | changed: [testbed-node-1] 2026-01-09 00:58:29.835905 | orchestrator | changed: [testbed-node-0] 2026-01-09 00:58:29.835912 | orchestrator | changed: [testbed-node-2] 2026-01-09 00:58:29.835918 | orchestrator | 
2026-01-09 00:58:29.835925 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-09 00:58:29.835932 | orchestrator | Friday 09 January 2026 00:55:45 +0000 (0:00:04.864) 0:00:21.584 ********
2026-01-09 00:58:29.835939 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:58:29.835946 | orchestrator |
2026-01-09 00:58:29.835952 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-09 00:58:29.835959 | orchestrator | Friday 09 January 2026 00:55:46 +0000 (0:00:00.594) 0:00:22.179 ********
2026-01-09 00:58:29.835983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.835992 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.835999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836005 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836033 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836040 | orchestrator |
2026-01-09 00:58:29.836047 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-09 00:58:29.836054 | orchestrator | Friday 09 January 2026 00:55:50 +0000 (0:00:04.189) 0:00:26.369 ********
2026-01-09 00:58:29.836062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836069 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836103 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836118 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836125 | orchestrator |
2026-01-09 00:58:29.836132 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-09 00:58:29.836139 | orchestrator | Friday 09 January 2026 00:55:53 +0000 (0:00:03.065) 0:00:29.434 ********
2026-01-09 00:58:29.836151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836164 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836185 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836205 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836212 | orchestrator |
2026-01-09 00:58:29.836219 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-01-09 00:58:29.836226 | orchestrator | Friday 09 January 2026 00:55:56 +0000 (0:00:03.038) 0:00:32.473 ********
2026-01-09 00:58:29.836242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-09 00:58:29.836281 | orchestrator |
2026-01-09 00:58:29.836288 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-09 00:58:29.836295 | orchestrator | Friday 09 January 2026 00:56:00 +0000 (0:00:04.154) 0:00:36.628 ********
2026-01-09 00:58:29.836302 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.836309 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:58:29.836316 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:58:29.836322 | orchestrator |
2026-01-09 00:58:29.836329 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-09 00:58:29.836336 | orchestrator | Friday 09 January 2026 00:56:01 +0000 (0:00:00.823) 0:00:37.451 ********
2026-01-09 00:58:29.836343 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.836351 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.836358 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.836364 | orchestrator |
2026-01-09 00:58:29.836370 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-09 00:58:29.836381 | orchestrator | Friday 09 January 2026 00:56:02 +0000 (0:00:00.615) 0:00:38.067 ********
2026-01-09 00:58:29.836387 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.836393 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.836400 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.836406 | orchestrator |
2026-01-09 00:58:29.836412 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-09 00:58:29.836420 | orchestrator | Friday 09 January 2026 00:56:02 +0000 (0:00:00.481) 0:00:38.548 ********
2026-01-09 00:58:29.836428 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-09 00:58:29.836435 | orchestrator | ...ignoring
2026-01-09 00:58:29.836443 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-09 00:58:29.836449 | orchestrator | ...ignoring
2026-01-09 00:58:29.836456 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-09 00:58:29.836463 | orchestrator | ...ignoring
2026-01-09 00:58:29.836470 | orchestrator |
2026-01-09 00:58:29.836476 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-09 00:58:29.836483 | orchestrator | Friday 09 January 2026 00:56:13 +0000 (0:00:10.918) 0:00:49.467 ********
2026-01-09 00:58:29.836490 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.836497 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.836503 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.836539 | orchestrator |
2026-01-09 00:58:29.836547 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-09 00:58:29.836554 | orchestrator | Friday 09 January 2026 00:56:13 +0000 (0:00:00.421) 0:00:49.888 ********
2026-01-09 00:58:29.836559 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836566 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836572 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836578 | orchestrator |
2026-01-09 00:58:29.836584 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-09 00:58:29.836705 | orchestrator | Friday 09 January 2026 00:56:14 +0000 (0:00:00.698) 0:00:50.586 ********
2026-01-09 00:58:29.836715 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836722 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836728 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836735 | orchestrator |
2026-01-09 00:58:29.836742 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-09 00:58:29.836749 | orchestrator | Friday 09 January 2026 00:56:15 +0000 (0:00:00.416) 0:00:51.003 ********
2026-01-09 00:58:29.836756 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836763 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836770 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836776 | orchestrator |
2026-01-09 00:58:29.836785 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-09 00:58:29.836795 | orchestrator | Friday 09 January 2026 00:56:15 +0000 (0:00:00.435) 0:00:51.438 ********
2026-01-09 00:58:29.836802 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.836808 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.836815 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.836821 | orchestrator |
2026-01-09 00:58:29.836835 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-09 00:58:29.836842 | orchestrator | Friday 09 January 2026 00:56:15 +0000 (0:00:00.656) 0:00:51.849 ********
2026-01-09 00:58:29.836856 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.836863 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836870 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836876 | orchestrator |
2026-01-09 00:58:29.836883 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-09 00:58:29.836897 | orchestrator | Friday 09 January 2026 00:56:16 +0000 (0:00:00.403) 0:00:52.506 ********
2026-01-09 00:58:29.836908 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.836920 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.836933 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-09 00:58:29.836949 | orchestrator |
2026-01-09 00:58:29.836957 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-09 00:58:29.836964 | orchestrator | Friday 09 January 2026 00:56:16 +0000 (0:00:00.403) 0:00:52.909 ********
2026-01-09 00:58:29.836970 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.836976 | orchestrator |
2026-01-09 00:58:29.836983 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-09 00:58:29.836989 | orchestrator | Friday 09 January 2026 00:56:27 +0000 (0:00:10.421) 0:01:03.331 ********
2026-01-09 00:58:29.836995 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837002 | orchestrator |
2026-01-09 00:58:29.837008 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-09 00:58:29.837013 | orchestrator | Friday 09 January 2026 00:56:27 +0000 (0:00:00.131) 0:01:03.462 ********
2026-01-09 00:58:29.837019 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.837025 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837031 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837037 | orchestrator |
2026-01-09 00:58:29.837044 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-09 00:58:29.837050 | orchestrator | Friday 09 January 2026 00:56:28 +0000 (0:00:00.885) 0:01:04.348 ********
2026-01-09 00:58:29.837057 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837063 | orchestrator |
2026-01-09 00:58:29.837070 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-09 00:58:29.837075 | orchestrator | Friday 09 January 2026 00:56:35 +0000 (0:00:07.503) 0:01:11.852 ********
2026-01-09 00:58:29.837081 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837086 | orchestrator |
2026-01-09 00:58:29.837093 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-09 00:58:29.837100 | orchestrator | Friday 09 January 2026 00:56:37 +0000 (0:00:01.626) 0:01:13.479 ********
2026-01-09 00:58:29.837106 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837113 | orchestrator |
2026-01-09 00:58:29.837119 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-09 00:58:29.837125 | orchestrator | Friday 09 January 2026 00:56:40 +0000 (0:00:02.628) 0:01:16.107 ********
2026-01-09 00:58:29.837132 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837138 | orchestrator |
2026-01-09 00:58:29.837144 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-09 00:58:29.837151 | orchestrator | Friday 09 January 2026 00:56:40 +0000 (0:00:00.137) 0:01:16.245 ********
2026-01-09 00:58:29.837157 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.837163 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837170 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837176 | orchestrator |
2026-01-09 00:58:29.837183 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-09 00:58:29.837189 | orchestrator | Friday 09 January 2026 00:56:40 +0000 (0:00:00.352) 0:01:16.598 ********
2026-01-09 00:58:29.837196 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.837202 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-09 00:58:29.837208 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:58:29.837215 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:58:29.837221 | orchestrator |
2026-01-09 00:58:29.837227 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-09 00:58:29.837234 | orchestrator | skipping: no hosts matched
2026-01-09 00:58:29.837240 | orchestrator |
2026-01-09 00:58:29.837247 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-09 00:58:29.837259 | orchestrator |
2026-01-09 00:58:29.837264 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-09 00:58:29.837271 | orchestrator | Friday 09 January 2026 00:56:41 +0000 (0:00:00.610) 0:01:17.208 ********
2026-01-09 00:58:29.837277 | orchestrator | changed: [testbed-node-1]
2026-01-09 00:58:29.837286 | orchestrator |
2026-01-09 00:58:29.837293 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-09 00:58:29.837299 | orchestrator | Friday 09 January 2026 00:56:59 +0000 (0:00:17.860) 0:01:35.069 ********
2026-01-09 00:58:29.837305 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.837312 | orchestrator |
2026-01-09 00:58:29.837319 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-09 00:58:29.837325 | orchestrator | Friday 09 January 2026 00:57:14 +0000 (0:00:15.586) 0:01:50.655 ********
2026-01-09 00:58:29.837332 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.837341 | orchestrator |
2026-01-09 00:58:29.837348 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-09 00:58:29.837356 | orchestrator |
2026-01-09 00:58:29.837363 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-09 00:58:29.837370 | orchestrator | Friday 09 January 2026 00:57:17 +0000 (0:00:02.447) 0:01:53.102 ********
2026-01-09 00:58:29.837377 | orchestrator | changed: [testbed-node-2]
2026-01-09 00:58:29.837385 | orchestrator |
2026-01-09 00:58:29.837392 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-09 00:58:29.837400 | orchestrator | Friday 09 January 2026 00:57:35 +0000 (0:00:17.853) 0:02:10.955 ********
2026-01-09 00:58:29.837408 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.837415 | orchestrator |
2026-01-09 00:58:29.837422 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-09 00:58:29.837434 | orchestrator | Friday 09 January 2026 00:57:50 +0000 (0:00:15.603) 0:02:26.559 ********
2026-01-09 00:58:29.837441 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.837448 | orchestrator |
2026-01-09 00:58:29.837455 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-01-09 00:58:29.837462 | orchestrator |
2026-01-09 00:58:29.837476 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-09 00:58:29.837484 | orchestrator | Friday 09 January 2026 00:57:53 +0000 (0:00:02.683) 0:02:29.242 ********
2026-01-09 00:58:29.837492 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837499 | orchestrator |
2026-01-09 00:58:29.837506 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-09 00:58:29.837562 | orchestrator | Friday 09 January 2026 00:58:10 +0000 (0:00:17.400) 0:02:46.643 ********
2026-01-09 00:58:29.837570 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837576 | orchestrator |
2026-01-09 00:58:29.837583 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-09 00:58:29.837589 | orchestrator | Friday 09 January 2026 00:58:11 +0000 (0:00:00.595) 0:02:47.238 ********
2026-01-09 00:58:29.837595 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837602 | orchestrator |
2026-01-09 00:58:29.837608 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-01-09 00:58:29.837615 | orchestrator |
2026-01-09 00:58:29.837621 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-01-09 00:58:29.837627 | orchestrator | Friday 09 January 2026 00:58:14 +0000 (0:00:02.802) 0:02:50.041 ********
2026-01-09 00:58:29.837634 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 00:58:29.837641 | orchestrator |
2026-01-09 00:58:29.837647 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-01-09 00:58:29.837654 | orchestrator | Friday 09 January 2026 00:58:14 +0000 (0:00:00.569) 0:02:50.610 ********
2026-01-09 00:58:29.837660 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837666 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837673 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837679 | orchestrator |
2026-01-09 00:58:29.837686 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-01-09 00:58:29.837699 | orchestrator | Friday 09 January 2026 00:58:17 +0000 (0:00:02.688) 0:02:53.298 ********
2026-01-09 00:58:29.837705 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837711 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837717 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837724 | orchestrator |
2026-01-09 00:58:29.837731 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-01-09 00:58:29.837737 | orchestrator | Friday 09 January 2026 00:58:19 +0000 (0:00:02.521) 0:02:55.819 ********
2026-01-09 00:58:29.837743 |
orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837750 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837756 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837763 | orchestrator |
2026-01-09 00:58:29.837770 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-01-09 00:58:29.837777 | orchestrator | Friday 09 January 2026 00:58:22 +0000 (0:00:02.409) 0:02:58.228 ********
2026-01-09 00:58:29.837784 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837789 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837796 | orchestrator | changed: [testbed-node-0]
2026-01-09 00:58:29.837802 | orchestrator |
2026-01-09 00:58:29.837809 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-01-09 00:58:29.837815 | orchestrator | Friday 09 January 2026 00:58:24 +0000 (0:00:02.099) 0:03:00.328 ********
2026-01-09 00:58:29.837821 | orchestrator | ok: [testbed-node-0]
2026-01-09 00:58:29.837828 | orchestrator | ok: [testbed-node-1]
2026-01-09 00:58:29.837833 | orchestrator | ok: [testbed-node-2]
2026-01-09 00:58:29.837838 | orchestrator |
2026-01-09 00:58:29.837845 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-01-09 00:58:29.837850 | orchestrator | Friday 09 January 2026 00:58:27 +0000 (0:00:03.200) 0:03:03.528 ********
2026-01-09 00:58:29.837856 | orchestrator | skipping: [testbed-node-0]
2026-01-09 00:58:29.837862 | orchestrator | skipping: [testbed-node-1]
2026-01-09 00:58:29.837869 | orchestrator | skipping: [testbed-node-2]
2026-01-09 00:58:29.837875 | orchestrator |
2026-01-09 00:58:29.837882 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 00:58:29.837889 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-09 00:58:29.837898 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-01-09 00:58:29.837906 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-09 00:58:29.837912 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-09 00:58:29.837918 | orchestrator |
2026-01-09 00:58:29.837925 | orchestrator |
2026-01-09 00:58:29.837931 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 00:58:29.837938 | orchestrator | Friday 09 January 2026 00:58:27 +0000 (0:00:00.237) 0:03:03.765 ********
2026-01-09 00:58:29.837945 | orchestrator | ===============================================================================
2026-01-09 00:58:29.837951 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.71s
2026-01-09 00:58:29.837958 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.19s
2026-01-09 00:58:29.837964 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.40s
2026-01-09 00:58:29.837971 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.92s
2026-01-09 00:58:29.837977 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.42s
2026-01-09 00:58:29.837990 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.50s
2026-01-09 00:58:29.838010 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.13s
2026-01-09 00:58:29.838068 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.87s
2026-01-09 00:58:29.838076 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.55s
2026-01-09 00:58:29.838082 | orchestrator | service-cert-copy :
mariadb | Copying over extra CA certificates -------- 4.19s
2026-01-09 00:58:29.838089 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.15s
2026-01-09 00:58:29.838096 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.28s
2026-01-09 00:58:29.838103 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.20s
2026-01-09 00:58:29.838110 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.07s
2026-01-09 00:58:29.838117 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.04s
2026-01-09 00:58:29.838124 | orchestrator | Check MariaDB service --------------------------------------------------- 2.89s
2026-01-09 00:58:29.838131 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s
2026-01-09 00:58:29.838138 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.69s
2026-01-09 00:58:29.838145 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.63s
2026-01-09 00:58:29.838151 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.52s
2026-01-09 00:58:29.838159 | orchestrator | 2026-01-09 00:58:29 | INFO  | Task 6bec8f75-f05a-4450-bdf6-07c8781f2195 is in state STARTED
2026-01-09 00:58:29.838166 | orchestrator | 2026-01-09 00:58:29 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED
2026-01-09 00:58:29.838173 | orchestrator | 2026-01-09 00:58:29 | INFO  | Wait 1 second(s) until the next check
2026-01-09 00:58:32.892495 | orchestrator | 2026-01-09 00:58:32 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 00:58:32.893849 | orchestrator | 2026-01-09 00:58:32 | INFO  | Task 6bec8f75-f05a-4450-bdf6-07c8781f2195 is in state STARTED
2026-01-09 00:58:32.896228 |
orchestrator | 2026-01-09 00:58:32 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED
2026-01-09 00:58:32.896274 | orchestrator | 2026-01-09 00:58:32 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:00:10.467391 | orchestrator | 2026-01-09 01:00:10 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state STARTED
2026-01-09 01:00:10.468482 | orchestrator | 2026-01-09 01:00:10 | INFO  | Task 6bec8f75-f05a-4450-bdf6-07c8781f2195 is in state STARTED
2026-01-09 01:00:10.469959 | orchestrator | 2026-01-09 01:00:10 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED
2026-01-09 01:00:10.469999 | orchestrator | 2026-01-09 01:00:10 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:00:13.521250 | orchestrator | 2026-01-09 01:00:13 | INFO  | Task f1b31d96-d903-4d7e-8ef9-347394298340 is in state SUCCESS
2026-01-09 01:00:13.521669 | orchestrator |
2026-01-09 01:00:13.524080 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-09 01:00:13.524272 | orchestrator | 2.16.14
2026-01-09 01:00:13.524286 | orchestrator |
2026-01-09 01:00:13.524576 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-09 01:00:13.524596 | orchestrator |
2026-01-09 01:00:13.524604 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-09 01:00:13.524613 | orchestrator | Friday 09 January 2026 00:57:58 +0000 (0:00:00.608) 0:00:00.608 ********
2026-01-09 01:00:13.524621 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 01:00:13.524885 | orchestrator |
2026-01-09 01:00:13.524896 | orchestrator | TASK [ceph-facts : Check if it is atomic host]
*********************************
2026-01-09 01:00:13.524904 | orchestrator | Friday 09 January 2026 00:57:58 +0000 (0:00:00.649) 0:00:01.257 ********
2026-01-09 01:00:13.524911 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.524919 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.524927 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.524934 | orchestrator |
2026-01-09 01:00:13.524942 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-09 01:00:13.524950 | orchestrator | Friday 09 January 2026 00:57:59 +0000 (0:00:00.652) 0:00:01.910 ********
2026-01-09 01:00:13.524957 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.524965 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.524972 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.524980 | orchestrator |
2026-01-09 01:00:13.524988 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-09 01:00:13.524996 | orchestrator | Friday 09 January 2026 00:57:59 +0000 (0:00:00.293) 0:00:02.203 ********
2026-01-09 01:00:13.525003 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525011 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525018 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525026 | orchestrator |
2026-01-09 01:00:13.525039 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-09 01:00:13.525050 | orchestrator | Friday 09 January 2026 00:58:00 +0000 (0:00:00.915) 0:00:03.119 ********
2026-01-09 01:00:13.525062 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525075 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525087 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525099 | orchestrator |
2026-01-09 01:00:13.525111 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-09 01:00:13.525124 | orchestrator | Friday 09 January 2026 00:58:00 +0000 (0:00:00.317) 0:00:03.436 ********
2026-01-09 01:00:13.525135 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525147 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525178 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525201 | orchestrator |
2026-01-09 01:00:13.525215 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-09 01:00:13.525228 | orchestrator | Friday 09 January 2026 00:58:01 +0000 (0:00:00.307) 0:00:03.743 ********
2026-01-09 01:00:13.525240 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525251 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525264 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525272 | orchestrator |
2026-01-09 01:00:13.525283 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-09 01:00:13.525296 | orchestrator | Friday 09 January 2026 00:58:01 +0000 (0:00:00.313) 0:00:04.056 ********
2026-01-09 01:00:13.525340 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.525355 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.525368 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.525380 | orchestrator |
2026-01-09 01:00:13.525393 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-09 01:00:13.525406 | orchestrator | Friday 09 January 2026 00:58:02 +0000 (0:00:00.506) 0:00:04.562 ********
2026-01-09 01:00:13.525418 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525430 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525441 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525449 | orchestrator |
2026-01-09 01:00:13.525456 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-09 01:00:13.525464 | orchestrator | Friday 09 January 2026 00:58:02 +0000 (0:00:00.307) 0:00:04.870 ********
2026-01-09 01:00:13.525472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 01:00:13.525480 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 01:00:13.525487 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 01:00:13.525495 | orchestrator |
2026-01-09 01:00:13.525502 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-09 01:00:13.525510 | orchestrator | Friday 09 January 2026 00:58:03 +0000 (0:00:00.644) 0:00:05.514 ********
2026-01-09 01:00:13.525524 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.525536 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:00:13.525549 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:00:13.525561 | orchestrator |
2026-01-09 01:00:13.525574 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-09 01:00:13.525586 | orchestrator | Friday 09 January 2026 00:58:03 +0000 (0:00:00.453) 0:00:05.968 ********
2026-01-09 01:00:13.525598 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-09 01:00:13.525609 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-09 01:00:13.525623 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-09 01:00:13.525636 | orchestrator |
2026-01-09 01:00:13.525650 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-09 01:00:13.525693 | orchestrator | Friday 09 January 2026 00:58:05 +0000 (0:00:02.230) 0:00:08.198 ********
2026-01-09 01:00:13.525707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-09 01:00:13.525719 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-1)
2026-01-09 01:00:13.525732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-09 01:00:13.525744 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.525757 | orchestrator |
2026-01-09 01:00:13.525898 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-09 01:00:13.525917 | orchestrator | Friday 09 January 2026 00:58:06 +0000 (0:00:00.643) 0:00:08.842 ********
2026-01-09 01:00:13.525967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.525985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.525999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526011 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.526090 | orchestrator |
2026-01-09 01:00:13.526119 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-09 01:00:13.526132 | orchestrator | Friday 09 January 2026 00:58:07 +0000 (0:00:00.836) 0:00:09.678 ********
2026-01-09 01:00:13.526149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526208 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.526221 | orchestrator |
2026-01-09 01:00:13.526233 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-09 01:00:13.526247 | orchestrator | Friday 09 January 2026 00:58:07 +0000 (0:00:00.361) 0:00:10.039 ********
2026-01-09 01:00:13.526263 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2073387bea78', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-09 00:58:04.144755', 'end': '2026-01-09 00:58:04.194674', 'delta': '0:00:00.049919', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2073387bea78'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526289 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '59a4f2509739', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-09 00:58:04.943831', 'end': '2026-01-09 00:58:04.977863', 'delta': '0:00:00.034032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['59a4f2509739'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526349 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c07f2d8f0787', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-09 00:58:05.516302', 'end': '2026-01-09 00:58:05.561580', 'delta': '0:00:00.045278', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c07f2d8f0787'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-09 01:00:13.526369 | orchestrator |
2026-01-09 01:00:13.526376 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-09 01:00:13.526384 | orchestrator | Friday 09
January 2026 00:58:07 +0000 (0:00:00.198) 0:00:10.238 ******** 2026-01-09 01:00:13.526391 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.526399 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.526406 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.526414 | orchestrator | 2026-01-09 01:00:13.526421 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-09 01:00:13.526428 | orchestrator | Friday 09 January 2026 00:58:08 +0000 (0:00:00.485) 0:00:10.723 ******** 2026-01-09 01:00:13.526435 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-09 01:00:13.526443 | orchestrator | 2026-01-09 01:00:13.526450 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-09 01:00:13.526458 | orchestrator | Friday 09 January 2026 00:58:10 +0000 (0:00:01.776) 0:00:12.500 ******** 2026-01-09 01:00:13.526465 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.526472 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.526479 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.526487 | orchestrator | 2026-01-09 01:00:13.526494 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-09 01:00:13.526503 | orchestrator | Friday 09 January 2026 00:58:10 +0000 (0:00:00.324) 0:00:12.824 ******** 2026-01-09 01:00:13.526514 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.526525 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.526536 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.526547 | orchestrator | 2026-01-09 01:00:13.526559 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-09 01:00:13.526570 | orchestrator | Friday 09 January 2026 00:58:10 +0000 (0:00:00.458) 0:00:13.283 ******** 2026-01-09 01:00:13.526582 | orchestrator | skipping: [testbed-node-3] 
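(Editor's note: each console record above follows the Zuul pattern `timestamp | node | message`; where loop output was flattened into run-on lines, individual records can be recovered by matching that prefix. A minimal sketch, assuming only this log's own format; the sample line is copied from the task output above:)

```python
import re

# Zuul console records look like:
# "2026-01-09 01:00:13.525502 | orchestrator | <message>"
RECORD = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \| (\S+) \| (.*)")

def parse_record(line: str):
    """Split one console line into (timestamp, node, message), or None."""
    m = RECORD.match(line)
    return m.groups() if m else None

ts, node, msg = parse_record(
    "2026-01-09 01:00:13.526376 | orchestrator | "
    "TASK [ceph-facts : Set_fact _container_exec_cmd]"
)
```

This is only a reading aid for the transcript, not part of the job itself.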
2026-01-09 01:00:13.526594 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.526605 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.526618 | orchestrator |
2026-01-09 01:00:13.526630 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-09 01:00:13.526643 | orchestrator | Friday 09 January 2026 00:58:11 +0000 (0:00:00.518) 0:00:13.802 ********
2026-01-09 01:00:13.526656 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:00:13.526668 | orchestrator |
2026-01-09 01:00:13.526679 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-09 01:00:13.526691 | orchestrator | Friday 09 January 2026 00:58:11 +0000 (0:00:00.122) 0:00:13.925 ********
2026-01-09 01:00:13.526702 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.526714 | orchestrator |
2026-01-09 01:00:13.526726 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-09 01:00:13.526738 | orchestrator | Friday 09 January 2026 00:58:11 +0000 (0:00:00.236) 0:00:14.162 ********
2026-01-09 01:00:13.526751 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.526763 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.526797 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.526810 | orchestrator |
2026-01-09 01:00:13.526950 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-09 01:00:13.526970 | orchestrator | Friday 09 January 2026 00:58:11 +0000 (0:00:00.281) 0:00:14.443 ********
2026-01-09 01:00:13.526978 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.526985 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.526993 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527000 | orchestrator |
2026-01-09 01:00:13.527007 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-09 01:00:13.527015 | orchestrator | Friday 09 January 2026 00:58:12 +0000 (0:00:00.319) 0:00:14.762 ********
2026-01-09 01:00:13.527022 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527039 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527047 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527054 | orchestrator |
2026-01-09 01:00:13.527062 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-09 01:00:13.527069 | orchestrator | Friday 09 January 2026 00:58:12 +0000 (0:00:00.552) 0:00:15.315 ********
2026-01-09 01:00:13.527077 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527084 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527091 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527098 | orchestrator |
2026-01-09 01:00:13.527106 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-09 01:00:13.527113 | orchestrator | Friday 09 January 2026 00:58:13 +0000 (0:00:00.359) 0:00:15.675 ********
2026-01-09 01:00:13.527120 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527127 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527141 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527148 | orchestrator |
2026-01-09 01:00:13.527156 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-09 01:00:13.527163 | orchestrator | Friday 09 January 2026 00:58:13 +0000 (0:00:00.322) 0:00:15.997 ********
2026-01-09 01:00:13.527170 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527178 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527185 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527192 | orchestrator |
2026-01-09 01:00:13.527269 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-09 01:00:13.527278 | orchestrator | Friday 09 January 2026 00:58:13 +0000 (0:00:00.331) 0:00:16.328 ********
2026-01-09 01:00:13.527285 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527292 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527299 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:00:13.527306 | orchestrator |
2026-01-09 01:00:13.527313 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-09 01:00:13.527321 | orchestrator | Friday 09 January 2026 00:58:14 +0000 (0:00:00.533) 0:00:16.861 ********
2026-01-09 01:00:13.527330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96', 'dm-uuid-LVM-d5EyZdToMQeCXu6Icc9w5GoEp2mvAXmxP4fK5ixFOffR83oVAWdKmYQD2rNVf4DY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca', 'dm-uuid-LVM-HpMF01jbYy44XkbQuKpsZ1d1GxpiAuMq4g8mOt1Py7W7M84xblXA7mVX4oxRUSOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5', 'dm-uuid-LVM-BCvLFOrl5lTOIzhIOTqbYHvqKnkSItb99spLEMqIKlY2qQg7ER6TnRnPC3SiFtva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNjwNA-8Yv6-8IKm-DGuK-tYjh-Z96L-T7HQ2l', 'scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766', 'scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3', 'dm-uuid-LVM-9xt4N27rbyxiupZQkedE4Lk7OpX1MspAm7gSjGxDsIKxpFEw37qNzsGTpQpKqU7Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1fSfwk-bcE2-7Eks-1N7R-H6PK-oxLX-5C7l9u', 'scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8', 'scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d', 'scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527614 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:00:13.527622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4i4uff-dwTm-Wsud-EMsq-258C-J0Jy-I6xjCC', 'scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e', 'scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AGGjM5-Oxx1-YEh6-MK2c-18t6-8dd6-hTAbHx', 'scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541', 'scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795', 'scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b', 'dm-uuid-LVM-iOiF3VRJTfHYsD4EY7pYghHTnllcEYUdFPvIOoX9xhlaY3x7oSbqXnde4RTHI0TL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-09 01:00:13.527721 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:00:13.527740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f', 'dm-uuid-LVM-AXAdqHL7tb5KEAxwKMqI4uxprAFMOl2FTzl3GE82y70MD6pfqkTzrRzujnEer9HR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-09 01:00:13.527866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [],
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 01:00:13.527881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mzD4GG-1RW7-GJRY-B41N-umfk-1UZC-FKsaF6', 'scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88', 'scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 01:00:13.527890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u55jh8-kpjC-Mvdc-qxrj-8QBS-XbFd-qUxXUi', 'scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338', 'scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 01:00:13.527900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595', 'scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 01:00:13.527922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-09 01:00:13.527932 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.527941 | orchestrator | 2026-01-09 01:00:13.527951 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-09 01:00:13.527960 | orchestrator | Friday 09 January 2026 00:58:14 +0000 (0:00:00.576) 0:00:17.437 ******** 2026-01-09 01:00:13.527969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96', 'dm-uuid-LVM-d5EyZdToMQeCXu6Icc9w5GoEp2mvAXmxP4fK5ixFOffR83oVAWdKmYQD2rNVf4DY'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.527986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca', 'dm-uuid-LVM-HpMF01jbYy44XkbQuKpsZ1d1GxpiAuMq4g8mOt1Py7W7M84xblXA7mVX4oxRUSOF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.527996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6662bdcf-027c-4645-b737-d68f8c08d7d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5', 'dm-uuid-LVM-BCvLFOrl5lTOIzhIOTqbYHvqKnkSItb99spLEMqIKlY2qQg7ER6TnRnPC3SiFtva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8cf949ba--669c--5e80--aece--22faa35a4e96-osd--block--8cf949ba--669c--5e80--aece--22faa35a4e96'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uNjwNA-8Yv6-8IKm-DGuK-tYjh-Z96L-T7HQ2l', 'scsi-0QEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766', 'scsi-SQEMU_QEMU_HARDDISK_026602f7-e016-4389-ab85-d50ae4a6b766'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3', 'dm-uuid-LVM-9xt4N27rbyxiupZQkedE4Lk7OpX1MspAm7gSjGxDsIKxpFEw37qNzsGTpQpKqU7Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--827da1a7--5d25--503a--baf6--83b57b40e5ca-osd--block--827da1a7--5d25--503a--baf6--83b57b40e5ca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1fSfwk-bcE2-7Eks-1N7R-H6PK-oxLX-5C7l9u', 'scsi-0QEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8', 'scsi-SQEMU_QEMU_HARDDISK_34135356-9cda-41c5-bcd3-e499823abbc8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d', 'scsi-SQEMU_QEMU_HARDDISK_2e0bb2fb-bc7f-4ba8-8e7d-d34ffa91d75d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528246 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528283 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.528290 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528418 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16', 'scsi-SQEMU_QEMU_HARDDISK_2993931b-e75c-4498-90d1-d6a3c3628430-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2edbad7c--3e58--5742--8752--3a5bd5d561b5-osd--block--2edbad7c--3e58--5742--8752--3a5bd5d561b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4i4uff-dwTm-Wsud-EMsq-258C-J0Jy-I6xjCC', 'scsi-0QEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e', 'scsi-SQEMU_QEMU_HARDDISK_a68cfd4f-f534-4fe8-b255-a5dba8df7f3e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--209c90a3--928e--55d9--9ec8--b900c012dcc3-osd--block--209c90a3--928e--55d9--9ec8--b900c012dcc3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AGGjM5-Oxx1-YEh6-MK2c-18t6-8dd6-hTAbHx', 'scsi-0QEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541', 'scsi-SQEMU_QEMU_HARDDISK_cd74cca7-b2f5-447d-904c-402f09518541'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795', 'scsi-SQEMU_QEMU_HARDDISK_2fbe7b7d-5687-429f-987a-2175aed9e795'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b', 'dm-uuid-LVM-iOiF3VRJTfHYsD4EY7pYghHTnllcEYUdFPvIOoX9xhlaY3x7oSbqXnde4RTHI0TL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f', 'dm-uuid-LVM-AXAdqHL7tb5KEAxwKMqI4uxprAFMOl2FTzl3GE82y70MD6pfqkTzrRzujnEer9HR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528487 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528494 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.528502 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ba11957-52c5-41d4-8e8d-198fe339e23b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-09 01:00:13.528602 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--11533966--1bdf--5daf--a468--949db0b9bc1b-osd--block--11533966--1bdf--5daf--a468--949db0b9bc1b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mzD4GG-1RW7-GJRY-B41N-umfk-1UZC-FKsaF6', 'scsi-0QEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88', 'scsi-SQEMU_QEMU_HARDDISK_e30b17a9-b87f-44a9-9e38-be5c8cfc2e88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528609 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f-osd--block--aa3bcdda--c0e8--51aa--8164--bd5963cdd10f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u55jh8-kpjC-Mvdc-qxrj-8QBS-XbFd-qUxXUi', 'scsi-0QEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338', 'scsi-SQEMU_QEMU_HARDDISK_f9fd9e1f-b101-43e5-b1f4-80d7cd19a338'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595', 'scsi-SQEMU_QEMU_HARDDISK_058c5952-7557-4cd3-b97b-610df2bea595'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-09-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-09 01:00:13.528645 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.528652 | orchestrator | 2026-01-09 01:00:13.528660 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-09 01:00:13.528667 | orchestrator | Friday 09 January 2026 00:58:15 +0000 (0:00:00.641) 0:00:18.079 ******** 2026-01-09 01:00:13.528675 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.528683 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.528690 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.528697 | orchestrator | 2026-01-09 01:00:13.528705 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-09 01:00:13.528712 | orchestrator | Friday 09 January 2026 00:58:16 +0000 (0:00:00.719) 0:00:18.799 ******** 2026-01-09 01:00:13.528720 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.528727 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.528734 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.528741 | orchestrator | 2026-01-09 01:00:13.528749 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-09 01:00:13.528756 | orchestrator | Friday 09 January 2026 00:58:16 +0000 (0:00:00.553) 0:00:19.352 ******** 2026-01-09 01:00:13.528763 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.528771 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.528807 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.528820 | orchestrator | 2026-01-09 01:00:13.528828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-09 01:00:13.528835 | orchestrator | Friday 09 January 2026 00:58:17 +0000 (0:00:00.802) 0:00:20.155 
******** 2026-01-09 01:00:13.528842 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.528850 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.528857 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.528864 | orchestrator | 2026-01-09 01:00:13.528872 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-09 01:00:13.528879 | orchestrator | Friday 09 January 2026 00:58:17 +0000 (0:00:00.327) 0:00:20.483 ******** 2026-01-09 01:00:13.528886 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.528893 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.528900 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.528908 | orchestrator | 2026-01-09 01:00:13.528915 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-09 01:00:13.528922 | orchestrator | Friday 09 January 2026 00:58:18 +0000 (0:00:00.420) 0:00:20.903 ******** 2026-01-09 01:00:13.528929 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.528936 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.528944 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.528951 | orchestrator | 2026-01-09 01:00:13.528958 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-09 01:00:13.528965 | orchestrator | Friday 09 January 2026 00:58:18 +0000 (0:00:00.526) 0:00:21.430 ******** 2026-01-09 01:00:13.528972 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-09 01:00:13.528980 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-09 01:00:13.528987 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-09 01:00:13.528999 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-09 01:00:13.529007 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-09 01:00:13.529014 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-09 01:00:13.529021 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-09 01:00:13.529028 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-09 01:00:13.529035 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-09 01:00:13.529042 | orchestrator | 2026-01-09 01:00:13.529050 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-09 01:00:13.529057 | orchestrator | Friday 09 January 2026 00:58:19 +0000 (0:00:00.881) 0:00:22.312 ******** 2026-01-09 01:00:13.529064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-09 01:00:13.529072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-09 01:00:13.529079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-09 01:00:13.529086 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-09 01:00:13.529100 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-09 01:00:13.529107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-09 01:00:13.529114 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.529122 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-09 01:00:13.529129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-09 01:00:13.529136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-09 01:00:13.529143 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.529150 | orchestrator | 2026-01-09 01:00:13.529157 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-09 01:00:13.529167 | orchestrator | Friday 09 January 2026 00:58:20 +0000 (0:00:00.355) 0:00:22.667 ******** 2026-01-09 
01:00:13.529185 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:00:13.529196 | orchestrator | 2026-01-09 01:00:13.529208 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-09 01:00:13.529221 | orchestrator | Friday 09 January 2026 00:58:20 +0000 (0:00:00.727) 0:00:23.395 ******** 2026-01-09 01:00:13.529238 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529251 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.529263 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.529274 | orchestrator | 2026-01-09 01:00:13.529285 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-09 01:00:13.529296 | orchestrator | Friday 09 January 2026 00:58:21 +0000 (0:00:00.330) 0:00:23.725 ******** 2026-01-09 01:00:13.529307 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529318 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.529329 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.529340 | orchestrator | 2026-01-09 01:00:13.529352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-09 01:00:13.529365 | orchestrator | Friday 09 January 2026 00:58:21 +0000 (0:00:00.328) 0:00:24.054 ******** 2026-01-09 01:00:13.529376 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529387 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.529399 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:00:13.529410 | orchestrator | 2026-01-09 01:00:13.529421 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-09 01:00:13.529432 | orchestrator | Friday 09 January 2026 00:58:21 +0000 (0:00:00.326) 0:00:24.381 ******** 2026-01-09 
01:00:13.529443 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.529454 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.529467 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.529488 | orchestrator | 2026-01-09 01:00:13.529500 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-09 01:00:13.529512 | orchestrator | Friday 09 January 2026 00:58:22 +0000 (0:00:00.659) 0:00:25.041 ******** 2026-01-09 01:00:13.529524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 01:00:13.529537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 01:00:13.529549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 01:00:13.529561 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529572 | orchestrator | 2026-01-09 01:00:13.529584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-09 01:00:13.529596 | orchestrator | Friday 09 January 2026 00:58:22 +0000 (0:00:00.395) 0:00:25.436 ******** 2026-01-09 01:00:13.529608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 01:00:13.529620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 01:00:13.529631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 01:00:13.529643 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529654 | orchestrator | 2026-01-09 01:00:13.529667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-09 01:00:13.529679 | orchestrator | Friday 09 January 2026 00:58:23 +0000 (0:00:00.385) 0:00:25.822 ******** 2026-01-09 01:00:13.529691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-09 01:00:13.529702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-09 01:00:13.529715 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-09 01:00:13.529728 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.529739 | orchestrator | 2026-01-09 01:00:13.529751 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-09 01:00:13.529763 | orchestrator | Friday 09 January 2026 00:58:23 +0000 (0:00:00.404) 0:00:26.226 ******** 2026-01-09 01:00:13.529836 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:00:13.529852 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:00:13.529865 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:00:13.529877 | orchestrator | 2026-01-09 01:00:13.529889 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-09 01:00:13.529902 | orchestrator | Friday 09 January 2026 00:58:24 +0000 (0:00:00.324) 0:00:26.550 ******** 2026-01-09 01:00:13.529915 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-09 01:00:13.529928 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-09 01:00:13.529939 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-09 01:00:13.529951 | orchestrator | 2026-01-09 01:00:13.529963 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-09 01:00:13.529976 | orchestrator | Friday 09 January 2026 00:58:24 +0000 (0:00:00.503) 0:00:27.054 ******** 2026-01-09 01:00:13.529989 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-09 01:00:13.530001 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-09 01:00:13.530072 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-09 01:00:13.530083 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-09 01:00:13.530090 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-09 01:00:13.530098 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-09 01:00:13.530105 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-09 01:00:13.530112 | orchestrator | 2026-01-09 01:00:13.530120 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-09 01:00:13.530127 | orchestrator | Friday 09 January 2026 00:58:25 +0000 (0:00:01.074) 0:00:28.128 ******** 2026-01-09 01:00:13.530134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-09 01:00:13.530154 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-09 01:00:13.530167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-09 01:00:13.530175 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-09 01:00:13.530182 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-09 01:00:13.530189 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-09 01:00:13.530206 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-09 01:00:13.530213 | orchestrator | 2026-01-09 01:00:13.530221 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-09 01:00:13.530228 | orchestrator | Friday 09 January 2026 00:58:27 +0000 (0:00:02.061) 0:00:30.190 ******** 2026-01-09 01:00:13.530235 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:00:13.530242 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:00:13.530250 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-09 01:00:13.530257 | orchestrator | 2026-01-09 01:00:13.530264 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-09 01:00:13.530271 | orchestrator | Friday 09 January 2026 00:58:28 +0000 (0:00:00.385) 0:00:30.576 ******** 2026-01-09 01:00:13.530280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 01:00:13.530291 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 01:00:13.530299 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 01:00:13.530306 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 01:00:13.530313 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-09 01:00:13.530321 | orchestrator | 2026-01-09 01:00:13.530328 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-09 01:00:13.530336 | orchestrator | Friday 09 January 2026 00:59:14 +0000 (0:00:46.205) 0:01:16.781 ******** 2026-01-09 01:00:13.530343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530350 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530365 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530386 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-09 01:00:13.530399 | orchestrator | 2026-01-09 01:00:13.530406 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-09 01:00:13.530413 | orchestrator | Friday 09 January 2026 00:59:40 +0000 (0:00:26.646) 0:01:43.428 ******** 2026-01-09 01:00:13.530421 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530428 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530450 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530457 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530464 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-09 01:00:13.530471 | orchestrator | 2026-01-09 01:00:13.530479 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-09 01:00:13.530486 | orchestrator | Friday 09 January 2026 00:59:53 +0000 (0:00:13.020) 0:01:56.449 ******** 2026-01-09 01:00:13.530493 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530504 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-09 01:00:13.530511 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530526 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-09 01:00:13.530537 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530545 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530552 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-09 01:00:13.530559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530566 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530573 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-09 01:00:13.530580 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530587 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530595 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-09 01:00:13.530602 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530609 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-09 01:00:13.530616 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-09 01:00:13.530627 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-09 01:00:13.530640 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-09 01:00:13.530653 | orchestrator | 2026-01-09 01:00:13.530665 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:00:13.530677 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-09 01:00:13.530691 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-09 01:00:13.530703 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-09 01:00:13.530722 | orchestrator | 2026-01-09 01:00:13.530734 | orchestrator | 2026-01-09 01:00:13.530746 | orchestrator | 2026-01-09 01:00:13.530757 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:00:13.530770 | orchestrator | Friday 09 January 2026 01:00:12 +0000 (0:00:18.619) 0:02:15.069 ******** 2026-01-09 01:00:13.530807 | orchestrator | =============================================================================== 2026-01-09 01:00:13.530819 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.21s 2026-01-09 01:00:13.530830 | orchestrator | generate keys ---------------------------------------------------------- 26.65s 2026-01-09 01:00:13.530841 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.62s 
2026-01-09 01:00:13.530851 | orchestrator | get keys from monitors ------------------------------------------------- 13.02s 2026-01-09 01:00:13.530863 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.23s 2026-01-09 01:00:13.530875 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.06s 2026-01-09 01:00:13.530886 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.78s 2026-01-09 01:00:13.530897 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2026-01-09 01:00:13.530908 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.92s 2026-01-09 01:00:13.530920 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2026-01-09 01:00:13.530931 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2026-01-09 01:00:13.530941 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.80s 2026-01-09 01:00:13.530952 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2026-01-09 01:00:13.530964 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2026-01-09 01:00:13.530975 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.66s 2026-01-09 01:00:13.530986 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2026-01-09 01:00:13.530997 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2026-01-09 01:00:13.531009 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-01-09 01:00:13.531020 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.64s 2026-01-09 
01:00:13.531031 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.64s 2026-01-09 01:00:13.531043 | orchestrator | 2026-01-09 01:00:13 | INFO  | Task 6bec8f75-f05a-4450-bdf6-07c8781f2195 is in state STARTED 2026-01-09 01:00:13.531055 | orchestrator | 2026-01-09 01:00:13 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:13.531075 | orchestrator | 2026-01-09 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:16.580985 | orchestrator | 2026-01-09 01:00:16.581128 | orchestrator | 2026-01-09 01:00:16 | INFO  | Task 6bec8f75-f05a-4450-bdf6-07c8781f2195 is in state SUCCESS 2026-01-09 01:00:16.581778 | orchestrator | 2026-01-09 01:00:16.581853 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:00:16.581862 | orchestrator | 2026-01-09 01:00:16.581869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:00:16.581875 | orchestrator | Friday 09 January 2026 00:58:32 +0000 (0:00:00.256) 0:00:00.256 ******** 2026-01-09 01:00:16.581882 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.581890 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.581897 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.581905 | orchestrator | 2026-01-09 01:00:16.581911 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:00:16.581918 | orchestrator | Friday 09 January 2026 00:58:32 +0000 (0:00:00.323) 0:00:00.580 ******** 2026-01-09 01:00:16.581926 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-09 01:00:16.582197 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-09 01:00:16.582204 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-09 01:00:16.582208 | orchestrator | 2026-01-09 01:00:16.582213 | orchestrator | PLAY 
[Apply role horizon] ****************************************************** 2026-01-09 01:00:16.582217 | orchestrator | 2026-01-09 01:00:16.582221 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-09 01:00:16.582225 | orchestrator | Friday 09 January 2026 00:58:33 +0000 (0:00:00.444) 0:00:01.024 ******** 2026-01-09 01:00:16.582229 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:00:16.582234 | orchestrator | 2026-01-09 01:00:16.582238 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-09 01:00:16.582242 | orchestrator | Friday 09 January 2026 00:58:33 +0000 (0:00:00.479) 0:00:01.504 ******** 2026-01-09 01:00:16.582253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.582291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.582306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.582313 | orchestrator | 2026-01-09 01:00:16.582319 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-09 01:00:16.582325 | orchestrator | Friday 09 January 2026 00:58:35 +0000 (0:00:01.328) 0:00:02.832 ******** 2026-01-09 01:00:16.582332 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582337 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582340 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582344 | orchestrator | 2026-01-09 01:00:16.582352 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2026-01-09 01:00:16.582356 | orchestrator | Friday 09 January 2026 00:58:35 +0000 (0:00:00.570) 0:00:03.402 ******** 2026-01-09 01:00:16.582363 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-09 01:00:16.582374 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-09 01:00:16.582378 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-09 01:00:16.582382 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-09 01:00:16.582386 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-09 01:00:16.582390 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-09 01:00:16.582393 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-09 01:00:16.582397 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-09 01:00:16.582401 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-09 01:00:16.582405 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-09 01:00:16.582408 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-09 01:00:16.582412 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-09 01:00:16.582416 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-09 01:00:16.582419 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-09 01:00:16.582423 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2026-01-09 01:00:16.582427 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-09 01:00:16.582431 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-09 01:00:16.582434 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-09 01:00:16.582438 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-09 01:00:16.582463 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-09 01:00:16.582467 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-09 01:00:16.582471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-09 01:00:16.582474 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-09 01:00:16.582478 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-09 01:00:16.582483 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-09 01:00:16.582489 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-09 01:00:16.582493 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-09 01:00:16.582497 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-09 01:00:16.582501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-09 01:00:16.582505 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-09 01:00:16.582508 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-09 01:00:16.582520 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-09 01:00:16.582527 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-09 01:00:16.582536 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-09 01:00:16.582542 | orchestrator | 2026-01-09 01:00:16.582548 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.582554 | orchestrator | Friday 09 January 2026 00:58:36 +0000 (0:00:00.726) 0:00:04.129 ******** 2026-01-09 01:00:16.582559 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582565 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582571 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582577 | orchestrator | 2026-01-09 01:00:16.582588 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.582594 | orchestrator | Friday 09 January 2026 00:58:36 +0000 (0:00:00.327) 0:00:04.456 ******** 2026-01-09 01:00:16.582600 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582607 | orchestrator | 2026-01-09 01:00:16.582618 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-01-09 01:00:16.582625 | orchestrator | Friday 09 January 2026 00:58:36 +0000 (0:00:00.154) 0:00:04.611 ******** 2026-01-09 01:00:16.582632 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582639 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.582645 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.582652 | orchestrator | 2026-01-09 01:00:16.582658 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.582662 | orchestrator | Friday 09 January 2026 00:58:37 +0000 (0:00:00.492) 0:00:05.103 ******** 2026-01-09 01:00:16.582666 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582670 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582674 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582678 | orchestrator | 2026-01-09 01:00:16.582682 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.582686 | orchestrator | Friday 09 January 2026 00:58:37 +0000 (0:00:00.309) 0:00:05.413 ******** 2026-01-09 01:00:16.582689 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582693 | orchestrator | 2026-01-09 01:00:16.582697 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.582701 | orchestrator | Friday 09 January 2026 00:58:37 +0000 (0:00:00.129) 0:00:05.542 ******** 2026-01-09 01:00:16.582705 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582709 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.582712 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.582716 | orchestrator | 2026-01-09 01:00:16.582720 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.582724 | orchestrator | Friday 09 January 2026 00:58:38 +0000 (0:00:00.358) 
0:00:05.901 ******** 2026-01-09 01:00:16.582728 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582731 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582735 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582739 | orchestrator | 2026-01-09 01:00:16.582743 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.582747 | orchestrator | Friday 09 January 2026 00:58:38 +0000 (0:00:00.306) 0:00:06.207 ******** 2026-01-09 01:00:16.582750 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582754 | orchestrator | 2026-01-09 01:00:16.582758 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.582762 | orchestrator | Friday 09 January 2026 00:58:38 +0000 (0:00:00.331) 0:00:06.538 ******** 2026-01-09 01:00:16.582765 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582775 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.582779 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.582808 | orchestrator | 2026-01-09 01:00:16.582813 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.582818 | orchestrator | Friday 09 January 2026 00:58:39 +0000 (0:00:00.294) 0:00:06.832 ******** 2026-01-09 01:00:16.582822 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582827 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582831 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582836 | orchestrator | 2026-01-09 01:00:16.582840 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.582845 | orchestrator | Friday 09 January 2026 00:58:39 +0000 (0:00:00.325) 0:00:07.158 ******** 2026-01-09 01:00:16.582849 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582855 | orchestrator | 2026-01-09 01:00:16.582861 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.582867 | orchestrator | Friday 09 January 2026 00:58:39 +0000 (0:00:00.161) 0:00:07.319 ******** 2026-01-09 01:00:16.582877 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582886 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.582892 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.582898 | orchestrator | 2026-01-09 01:00:16.582905 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.582911 | orchestrator | Friday 09 January 2026 00:58:39 +0000 (0:00:00.300) 0:00:07.620 ******** 2026-01-09 01:00:16.582917 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.582924 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.582930 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.582937 | orchestrator | 2026-01-09 01:00:16.582942 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.582949 | orchestrator | Friday 09 January 2026 00:58:40 +0000 (0:00:00.493) 0:00:08.113 ******** 2026-01-09 01:00:16.582956 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582962 | orchestrator | 2026-01-09 01:00:16.582968 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.582975 | orchestrator | Friday 09 January 2026 00:58:40 +0000 (0:00:00.142) 0:00:08.256 ******** 2026-01-09 01:00:16.582982 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.582989 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.582996 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583002 | orchestrator | 2026-01-09 01:00:16.583009 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.583016 | orchestrator | Friday 09 January 2026 00:58:40 +0000 
(0:00:00.323) 0:00:08.580 ******** 2026-01-09 01:00:16.583021 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.583025 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.583030 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.583034 | orchestrator | 2026-01-09 01:00:16.583039 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.583043 | orchestrator | Friday 09 January 2026 00:58:41 +0000 (0:00:00.319) 0:00:08.899 ******** 2026-01-09 01:00:16.583048 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583052 | orchestrator | 2026-01-09 01:00:16.583057 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.583062 | orchestrator | Friday 09 January 2026 00:58:41 +0000 (0:00:00.147) 0:00:09.047 ******** 2026-01-09 01:00:16.583067 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583076 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583080 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583085 | orchestrator | 2026-01-09 01:00:16.583090 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.583100 | orchestrator | Friday 09 January 2026 00:58:41 +0000 (0:00:00.300) 0:00:09.347 ******** 2026-01-09 01:00:16.583105 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.583110 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.583126 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.583132 | orchestrator | 2026-01-09 01:00:16.583138 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.583144 | orchestrator | Friday 09 January 2026 00:58:42 +0000 (0:00:00.533) 0:00:09.881 ******** 2026-01-09 01:00:16.583150 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583155 | orchestrator | 2026-01-09 01:00:16.583161 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.583166 | orchestrator | Friday 09 January 2026 00:58:42 +0000 (0:00:00.141) 0:00:10.022 ******** 2026-01-09 01:00:16.583173 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583179 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583184 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583190 | orchestrator | 2026-01-09 01:00:16.583196 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.583203 | orchestrator | Friday 09 January 2026 00:58:42 +0000 (0:00:00.278) 0:00:10.301 ******** 2026-01-09 01:00:16.583209 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.583216 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.583222 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.583228 | orchestrator | 2026-01-09 01:00:16.583234 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.583240 | orchestrator | Friday 09 January 2026 00:58:42 +0000 (0:00:00.330) 0:00:10.632 ******** 2026-01-09 01:00:16.583244 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583248 | orchestrator | 2026-01-09 01:00:16.583251 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.583257 | orchestrator | Friday 09 January 2026 00:58:43 +0000 (0:00:00.150) 0:00:10.782 ******** 2026-01-09 01:00:16.583263 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583272 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583280 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583285 | orchestrator | 2026-01-09 01:00:16.583291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.583298 | orchestrator | Friday 09 January 2026 
00:58:43 +0000 (0:00:00.294) 0:00:11.077 ******** 2026-01-09 01:00:16.583304 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.583311 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.583317 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.583323 | orchestrator | 2026-01-09 01:00:16.583327 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.583330 | orchestrator | Friday 09 January 2026 00:58:43 +0000 (0:00:00.570) 0:00:11.648 ******** 2026-01-09 01:00:16.583334 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583338 | orchestrator | 2026-01-09 01:00:16.583342 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.583346 | orchestrator | Friday 09 January 2026 00:58:44 +0000 (0:00:00.144) 0:00:11.792 ******** 2026-01-09 01:00:16.583349 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583353 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583357 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583361 | orchestrator | 2026-01-09 01:00:16.583365 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-09 01:00:16.583368 | orchestrator | Friday 09 January 2026 00:58:44 +0000 (0:00:00.345) 0:00:12.137 ******** 2026-01-09 01:00:16.583372 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:00:16.583376 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:00:16.583380 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:00:16.583384 | orchestrator | 2026-01-09 01:00:16.583387 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-09 01:00:16.583391 | orchestrator | Friday 09 January 2026 00:58:44 +0000 (0:00:00.323) 0:00:12.461 ******** 2026-01-09 01:00:16.583395 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583399 | orchestrator | 2026-01-09 
01:00:16.583403 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-09 01:00:16.583413 | orchestrator | Friday 09 January 2026 00:58:44 +0000 (0:00:00.132) 0:00:12.593 ******** 2026-01-09 01:00:16.583416 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583420 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583424 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583428 | orchestrator | 2026-01-09 01:00:16.583432 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-09 01:00:16.583435 | orchestrator | Friday 09 January 2026 00:58:45 +0000 (0:00:00.534) 0:00:13.127 ******** 2026-01-09 01:00:16.583439 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:00:16.583443 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:00:16.583447 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:00:16.583451 | orchestrator | 2026-01-09 01:00:16.583454 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-09 01:00:16.583458 | orchestrator | Friday 09 January 2026 00:58:47 +0000 (0:00:01.564) 0:00:14.692 ******** 2026-01-09 01:00:16.583462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-09 01:00:16.583466 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-09 01:00:16.583470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-09 01:00:16.583474 | orchestrator | 2026-01-09 01:00:16.583478 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-09 01:00:16.583482 | orchestrator | Friday 09 January 2026 00:58:49 +0000 (0:00:01.983) 0:00:16.676 ******** 2026-01-09 01:00:16.583485 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-09 01:00:16.583494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-09 01:00:16.583498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-09 01:00:16.583502 | orchestrator | 2026-01-09 01:00:16.583505 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-09 01:00:16.583513 | orchestrator | Friday 09 January 2026 00:58:51 +0000 (0:00:02.267) 0:00:18.943 ******** 2026-01-09 01:00:16.583517 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-09 01:00:16.583521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-09 01:00:16.583525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-09 01:00:16.583528 | orchestrator | 2026-01-09 01:00:16.583532 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-09 01:00:16.583536 | orchestrator | Friday 09 January 2026 00:58:53 +0000 (0:00:02.137) 0:00:21.081 ******** 2026-01-09 01:00:16.583540 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583543 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583547 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583551 | orchestrator | 2026-01-09 01:00:16.583555 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-09 01:00:16.583558 | orchestrator | Friday 09 January 2026 00:58:53 +0000 (0:00:00.311) 0:00:21.393 ******** 2026-01-09 01:00:16.583562 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583566 | orchestrator | skipping: [testbed-node-1] 2026-01-09 
01:00:16.583569 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583573 | orchestrator | 2026-01-09 01:00:16.583577 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-09 01:00:16.583581 | orchestrator | Friday 09 January 2026 00:58:54 +0000 (0:00:00.299) 0:00:21.692 ******** 2026-01-09 01:00:16.583584 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:00:16.583588 | orchestrator | 2026-01-09 01:00:16.583592 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-09 01:00:16.583599 | orchestrator | Friday 09 January 2026 00:58:54 +0000 (0:00:00.844) 0:00:22.537 ******** 2026-01-09 01:00:16.583606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583632 | orchestrator | 2026-01-09 01:00:16.583636 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-09 01:00:16.583654 | orchestrator | Friday 09 January 2026 00:58:56 +0000 (0:00:01.720) 0:00:24.257 ******** 2026-01-09 01:00:16.583666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583677 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583692 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583699 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583718 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583724 | orchestrator | 2026-01-09 01:00:16.583730 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-09 01:00:16.583736 | orchestrator | Friday 09 January 2026 00:58:57 +0000 (0:00:00.640) 0:00:24.897 ******** 2026-01-09 01:00:16.583751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583756 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583767 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-09 01:00:16.583808 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583812 | orchestrator | 2026-01-09 01:00:16.583816 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-09 01:00:16.583820 | orchestrator | Friday 09 January 2026 00:58:58 +0000 (0:00:00.831) 0:00:25.729 ******** 2026-01-09 01:00:16.583824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-09 01:00:16.583851 | orchestrator | 2026-01-09 01:00:16.583855 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-09 01:00:16.583858 | orchestrator | Friday 09 January 2026 00:58:59 +0000 (0:00:01.637) 0:00:27.367 ******** 2026-01-09 01:00:16.583862 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:00:16.583866 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:00:16.583870 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:00:16.583873 | orchestrator | 2026-01-09 01:00:16.583877 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-09 01:00:16.583881 | orchestrator | Friday 09 January 2026 00:59:00 +0000 (0:00:00.323) 0:00:27.691 ******** 2026-01-09 01:00:16.583887 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:00:16.583891 | orchestrator | 2026-01-09 01:00:16.583895 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-09 01:00:16.583902 | orchestrator | Friday 09 January 2026 00:59:00 +0000 (0:00:00.607) 0:00:28.298 ******** 2026-01-09 01:00:16.583910 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:00:16.583914 | orchestrator | 2026-01-09 01:00:16.583918 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-09 01:00:16.583922 | orchestrator | Friday 09 January 2026 
00:59:03 +0000 (0:00:02.665) 0:00:30.963 ******** 2026-01-09 01:00:16.583925 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:00:16.583929 | orchestrator | 2026-01-09 01:00:16.583933 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-09 01:00:16.583937 | orchestrator | Friday 09 January 2026 00:59:06 +0000 (0:00:02.916) 0:00:33.880 ******** 2026-01-09 01:00:16.583940 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:00:16.583944 | orchestrator | 2026-01-09 01:00:16.583948 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-09 01:00:16.583952 | orchestrator | Friday 09 January 2026 00:59:23 +0000 (0:00:17.629) 0:00:51.509 ******** 2026-01-09 01:00:16.583956 | orchestrator | 2026-01-09 01:00:16.583959 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-09 01:00:16.583963 | orchestrator | Friday 09 January 2026 00:59:23 +0000 (0:00:00.068) 0:00:51.578 ******** 2026-01-09 01:00:16.583967 | orchestrator | 2026-01-09 01:00:16.583971 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-09 01:00:16.583974 | orchestrator | Friday 09 January 2026 00:59:23 +0000 (0:00:00.070) 0:00:51.649 ******** 2026-01-09 01:00:16.583978 | orchestrator | 2026-01-09 01:00:16.583982 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-09 01:00:16.583986 | orchestrator | Friday 09 January 2026 00:59:24 +0000 (0:00:00.068) 0:00:51.718 ******** 2026-01-09 01:00:16.583989 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:00:16.583993 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:00:16.583997 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:00:16.584003 | orchestrator | 2026-01-09 01:00:16.584011 | orchestrator | PLAY RECAP ********************************************************************* 
2026-01-09 01:00:16.584020 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-09 01:00:16.584028 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-09 01:00:16.584033 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-09 01:00:16.584039 | orchestrator | 2026-01-09 01:00:16.584045 | orchestrator | 2026-01-09 01:00:16.584051 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:00:16.584057 | orchestrator | Friday 09 January 2026 01:00:14 +0000 (0:00:49.970) 0:01:41.689 ******** 2026-01-09 01:00:16.584063 | orchestrator | =============================================================================== 2026-01-09 01:00:16.584069 | orchestrator | horizon : Restart horizon container ------------------------------------ 49.97s 2026-01-09 01:00:16.584074 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.63s 2026-01-09 01:00:16.584080 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.92s 2026-01-09 01:00:16.584085 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.67s 2026-01-09 01:00:16.584091 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.27s 2026-01-09 01:00:16.584097 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.14s 2026-01-09 01:00:16.584103 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s 2026-01-09 01:00:16.584108 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.72s 2026-01-09 01:00:16.584114 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s 2026-01-09 01:00:16.584120 
| orchestrator | horizon : Copying over config.json files for services ------------------- 1.56s 2026-01-09 01:00:16.584132 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.33s 2026-01-09 01:00:16.584138 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2026-01-09 01:00:16.584144 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2026-01-09 01:00:16.584150 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-01-09 01:00:16.584156 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2026-01-09 01:00:16.584163 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-01-09 01:00:16.584169 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-01-09 01:00:16.584175 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.57s 2026-01-09 01:00:16.584181 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-01-09 01:00:16.584188 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-01-09 01:00:16.585087 | orchestrator | 2026-01-09 01:00:16 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:16.586240 | orchestrator | 2026-01-09 01:00:16 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:16.586288 | orchestrator | 2026-01-09 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:19.635929 | orchestrator | 2026-01-09 01:00:19 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:19.641383 | orchestrator | 2026-01-09 01:00:19 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 
2026-01-09 01:00:19.641482 | orchestrator | 2026-01-09 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:22.687673 | orchestrator | 2026-01-09 01:00:22 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:22.689831 | orchestrator | 2026-01-09 01:00:22 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:22.689865 | orchestrator | 2026-01-09 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:25.742533 | orchestrator | 2026-01-09 01:00:25 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:25.744214 | orchestrator | 2026-01-09 01:00:25 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:25.744517 | orchestrator | 2026-01-09 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:28.787419 | orchestrator | 2026-01-09 01:00:28 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:28.789464 | orchestrator | 2026-01-09 01:00:28 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:28.789921 | orchestrator | 2026-01-09 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:31.843585 | orchestrator | 2026-01-09 01:00:31 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:31.847731 | orchestrator | 2026-01-09 01:00:31 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:31.847812 | orchestrator | 2026-01-09 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:34.903328 | orchestrator | 2026-01-09 01:00:34 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:34.905793 | orchestrator | 2026-01-09 01:00:34 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:34.906534 | orchestrator | 2026-01-09 01:00:34 | INFO  | Wait 
1 second(s) until the next check 2026-01-09 01:00:37.965572 | orchestrator | 2026-01-09 01:00:37 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:37.968022 | orchestrator | 2026-01-09 01:00:37 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:37.968117 | orchestrator | 2026-01-09 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:41.017282 | orchestrator | 2026-01-09 01:00:41 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:41.019521 | orchestrator | 2026-01-09 01:00:41 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:41.019760 | orchestrator | 2026-01-09 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:44.075023 | orchestrator | 2026-01-09 01:00:44 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:44.078147 | orchestrator | 2026-01-09 01:00:44 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:44.078208 | orchestrator | 2026-01-09 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:47.131225 | orchestrator | 2026-01-09 01:00:47 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:47.133925 | orchestrator | 2026-01-09 01:00:47 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:47.134009 | orchestrator | 2026-01-09 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:50.192956 | orchestrator | 2026-01-09 01:00:50 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:50.194214 | orchestrator | 2026-01-09 01:00:50 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:50.194262 | orchestrator | 2026-01-09 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:53.240573 | orchestrator | 
2026-01-09 01:00:53 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state STARTED 2026-01-09 01:00:53.241338 | orchestrator | 2026-01-09 01:00:53 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:53.242339 | orchestrator | 2026-01-09 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:56.290746 | orchestrator | 2026-01-09 01:00:56 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:00:56.291137 | orchestrator | 2026-01-09 01:00:56 | INFO  | Task 232c592f-563d-47af-ba89-46d8e56310c1 is in state SUCCESS 2026-01-09 01:00:56.292111 | orchestrator | 2026-01-09 01:00:56 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:56.292511 | orchestrator | 2026-01-09 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:00:59.343165 | orchestrator | 2026-01-09 01:00:59 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:00:59.345024 | orchestrator | 2026-01-09 01:00:59 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:00:59.345069 | orchestrator | 2026-01-09 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:02.397183 | orchestrator | 2026-01-09 01:01:02 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:02.399151 | orchestrator | 2026-01-09 01:01:02 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:02.399243 | orchestrator | 2026-01-09 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:05.450995 | orchestrator | 2026-01-09 01:01:05 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:05.455448 | orchestrator | 2026-01-09 01:01:05 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:05.455562 | orchestrator | 2026-01-09 01:01:05 | INFO  | Wait 1 second(s) until 
the next check 2026-01-09 01:01:08.497781 | orchestrator | 2026-01-09 01:01:08 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:08.499643 | orchestrator | 2026-01-09 01:01:08 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:08.499716 | orchestrator | 2026-01-09 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:11.539529 | orchestrator | 2026-01-09 01:01:11 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:11.541599 | orchestrator | 2026-01-09 01:01:11 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:11.541663 | orchestrator | 2026-01-09 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:14.575464 | orchestrator | 2026-01-09 01:01:14 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:14.576602 | orchestrator | 2026-01-09 01:01:14 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:14.576646 | orchestrator | 2026-01-09 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:17.624333 | orchestrator | 2026-01-09 01:01:17 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:17.624474 | orchestrator | 2026-01-09 01:01:17 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:17.624654 | orchestrator | 2026-01-09 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:20.664294 | orchestrator | 2026-01-09 01:01:20 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:20.666782 | orchestrator | 2026-01-09 01:01:20 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state STARTED 2026-01-09 01:01:20.666849 | orchestrator | 2026-01-09 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:23.795160 | orchestrator | 2026-01-09 
01:01:23.795306 | orchestrator | 2026-01-09 01:01:23.795319 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-09 01:01:23.795327 | orchestrator | 2026-01-09 01:01:23.795333 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-09 01:01:23.795340 | orchestrator | Friday 09 January 2026 01:00:17 +0000 (0:00:00.165) 0:00:00.165 ******** 2026-01-09 01:01:23.795346 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-09 01:01:23.795354 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795366 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-09 01:01:23.795373 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795379 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-09 01:01:23.795679 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-09 01:01:23.795702 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-09 01:01:23.795712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-09 01:01:23.795721 | orchestrator | 2026-01-09 01:01:23.795731 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-09 01:01:23.795764 | orchestrator | Friday 09 January 2026 01:00:22 +0000 (0:00:05.252) 0:00:05.417 ******** 2026-01-09 01:01:23.795774 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-09 01:01:23.795780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795786 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-09 01:01:23.795798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795803 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-09 01:01:23.795809 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-09 01:01:23.795815 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-09 01:01:23.795821 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-09 01:01:23.795827 | orchestrator | 2026-01-09 01:01:23.795833 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-09 01:01:23.795839 | orchestrator | Friday 09 January 2026 01:00:27 +0000 (0:00:04.690) 0:00:10.108 ******** 2026-01-09 01:01:23.795846 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-09 01:01:23.795852 | orchestrator | 2026-01-09 01:01:23.795858 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-09 01:01:23.795864 | orchestrator | Friday 09 January 2026 01:00:28 +0000 (0:00:01.046) 0:00:11.154 ******** 2026-01-09 01:01:23.795870 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-09 
01:01:23.795876 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795883 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795889 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-09 01:01:23.795894 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.795900 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-09 01:01:23.795906 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-09 01:01:23.795912 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-09 01:01:23.795918 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-09 01:01:23.795985 | orchestrator | 2026-01-09 01:01:23.795991 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-09 01:01:23.795997 | orchestrator | Friday 09 January 2026 01:00:42 +0000 (0:00:13.890) 0:00:25.045 ******** 2026-01-09 01:01:23.796003 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-09 01:01:23.796010 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-09 01:01:23.796021 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-09 01:01:23.796030 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-09 01:01:23.796087 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-09 01:01:23.796099 | orchestrator 
| ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-09 01:01:23.796110 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-09 01:01:23.796131 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-09 01:01:23.796139 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-09 01:01:23.796145 | orchestrator | 2026-01-09 01:01:23.796151 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-09 01:01:23.796157 | orchestrator | Friday 09 January 2026 01:00:46 +0000 (0:00:04.126) 0:00:29.171 ******** 2026-01-09 01:01:23.796164 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-09 01:01:23.796171 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.796177 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.796188 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-09 01:01:23.796195 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-09 01:01:23.796201 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-09 01:01:23.796206 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-09 01:01:23.796212 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-09 01:01:23.796218 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-09 01:01:23.796224 | orchestrator | 2026-01-09 01:01:23.796230 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:01:23.796236 | orchestrator | testbed-manager : ok=6 
 changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:01:23.796244 | orchestrator | 2026-01-09 01:01:23.796250 | orchestrator | 2026-01-09 01:01:23.796256 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:01:23.796262 | orchestrator | Friday 09 January 2026 01:00:54 +0000 (0:00:07.438) 0:00:36.610 ******** 2026-01-09 01:01:23.796268 | orchestrator | =============================================================================== 2026-01-09 01:01:23.796274 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.89s 2026-01-09 01:01:23.796280 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.44s 2026-01-09 01:01:23.796286 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.25s 2026-01-09 01:01:23.796293 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.69s 2026-01-09 01:01:23.796301 | orchestrator | Check if target directories exist --------------------------------------- 4.13s 2026-01-09 01:01:23.796308 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-01-09 01:01:23.796317 | orchestrator | 2026-01-09 01:01:23.796327 | orchestrator | 2026-01-09 01:01:23.796336 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:01:23.796346 | orchestrator | 2026-01-09 01:01:23.796356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:01:23.796382 | orchestrator | Friday 09 January 2026 00:58:32 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-01-09 01:01:23.796402 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:01:23.796413 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:01:23.796423 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:01:23.796433 | orchestrator 
| 2026-01-09 01:01:23.796440 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:01:23.796447 | orchestrator | Friday 09 January 2026 00:58:32 +0000 (0:00:00.313) 0:00:00.580 ******** 2026-01-09 01:01:23.796454 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-09 01:01:23.796461 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-09 01:01:23.796467 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-09 01:01:23.796473 | orchestrator | 2026-01-09 01:01:23.796479 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-09 01:01:23.796491 | orchestrator | 2026-01-09 01:01:23.796497 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.796502 | orchestrator | Friday 09 January 2026 00:58:33 +0000 (0:00:00.434) 0:00:01.014 ******** 2026-01-09 01:01:23.796508 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:01:23.796515 | orchestrator | 2026-01-09 01:01:23.796525 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-09 01:01:23.796535 | orchestrator | Friday 09 January 2026 00:58:33 +0000 (0:00:00.551) 0:00:01.566 ******** 2026-01-09 01:01:23.796586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796745 | orchestrator | 2026-01-09 01:01:23.796756 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-09 01:01:23.796763 | orchestrator | Friday 09 January 2026 00:58:35 +0000 (0:00:01.938) 0:00:03.504 ******** 2026-01-09 01:01:23.796769 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.796775 | orchestrator | 2026-01-09 01:01:23.796781 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-09 01:01:23.796787 | orchestrator | Friday 09 January 2026 00:58:35 +0000 (0:00:00.139) 0:00:03.644 ******** 2026-01-09 01:01:23.796799 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.796805 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.796810 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.796816 | orchestrator | 2026-01-09 01:01:23.796822 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-09 01:01:23.796828 | orchestrator | Friday 09 January 2026 00:58:36 +0000 (0:00:00.516) 0:00:04.161 ******** 2026-01-09 01:01:23.796834 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:01:23.796840 | orchestrator | 2026-01-09 01:01:23.796845 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.796851 | orchestrator | Friday 09 January 2026 00:58:37 +0000 (0:00:00.897) 0:00:05.059 ******** 2026-01-09 01:01:23.796857 | orchestrator | 
included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:01:23.796863 | orchestrator | 2026-01-09 01:01:23.796869 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-09 01:01:23.796875 | orchestrator | Friday 09 January 2026 00:58:37 +0000 (0:00:00.519) 0:00:05.578 ******** 2026-01-09 01:01:23.796901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.796956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.796997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797003 | orchestrator | 2026-01-09 01:01:23.797014 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-09 01:01:23.797020 | orchestrator | Friday 09 January 2026 00:58:41 +0000 (0:00:03.575) 0:00:09.154 ******** 2026-01-09 01:01:23.797026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797046 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797103 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.797113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797150 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.797159 | orchestrator | 2026-01-09 01:01:23.797167 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-09 01:01:23.797176 | orchestrator | Friday 09 January 2026 00:58:42 +0000 (0:00:00.847) 0:00:10.002 ******** 2026-01-09 01:01:23.797191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797226 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:23 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:23 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:23 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:23 | INFO  | Task 1d7eedb8-1ae8-48e5-b5d8-57d6a772de40 is in state SUCCESS 2026-01-09 01:01:23.797252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797330 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.797340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797368 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.797376 | orchestrator | 2026-01-09 01:01:23.797385 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-09 01:01:23.797399 | orchestrator | Friday 09 January 2026 00:58:43 +0000 (0:00:00.802) 0:00:10.805 ******** 2026-01-09 01:01:23.797414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-01-09 01:01:23.797476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797514 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797523 | orchestrator | 2026-01-09 01:01:23.797532 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-09 01:01:23.797541 | orchestrator | Friday 09 January 2026 00:58:46 +0000 (0:00:02.964) 0:00:13.770 ******** 2026-01-09 01:01:23.797551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797569 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.797623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797640 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.797681 | orchestrator | 2026-01-09 01:01:23.797691 | orchestrator | 
TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-09 01:01:23.797701 | orchestrator | Friday 09 January 2026 00:58:51 +0000 (0:00:05.618) 0:00:19.389 ******** 2026-01-09 01:01:23.797712 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.797721 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:01:23.797731 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:01:23.797741 | orchestrator | 2026-01-09 01:01:23.797751 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-09 01:01:23.797761 | orchestrator | Friday 09 January 2026 00:58:53 +0000 (0:00:01.662) 0:00:21.051 ******** 2026-01-09 01:01:23.797771 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797781 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.797792 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.797799 | orchestrator | 2026-01-09 01:01:23.797804 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-09 01:01:23.797811 | orchestrator | Friday 09 January 2026 00:58:53 +0000 (0:00:00.527) 0:00:21.579 ******** 2026-01-09 01:01:23.797816 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797822 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.797828 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.797834 | orchestrator | 2026-01-09 01:01:23.797840 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-09 01:01:23.797845 | orchestrator | Friday 09 January 2026 00:58:54 +0000 (0:00:00.305) 0:00:21.884 ******** 2026-01-09 01:01:23.797851 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797857 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.797863 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.797869 | orchestrator | 2026-01-09 01:01:23.797874 | orchestrator | TASK 
[keystone : Copying over existing policy file] **************************** 2026-01-09 01:01:23.797880 | orchestrator | Friday 09 January 2026 00:58:54 +0000 (0:00:00.530) 0:00:22.415 ******** 2026-01-09 01:01:23.797887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.797917 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.797957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-09 01:01:23.797972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 01:01:23.797990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-09 
01:01:23.798006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.798077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-09 01:01:23.798094 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.798102 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.798108 | orchestrator | 2026-01-09 01:01:23.798114 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.798120 | orchestrator | Friday 09 January 2026 00:58:55 +0000 (0:00:00.652) 0:00:23.067 ******** 2026-01-09 01:01:23.798126 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.798135 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.798145 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.798155 | orchestrator | 2026-01-09 01:01:23.798164 | 
orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-09 01:01:23.798174 | orchestrator | Friday 09 January 2026 00:58:55 +0000 (0:00:00.325) 0:00:23.393 ******** 2026-01-09 01:01:23.798184 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-09 01:01:23.798195 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-09 01:01:23.798205 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-09 01:01:23.798215 | orchestrator | 2026-01-09 01:01:23.798223 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-09 01:01:23.798233 | orchestrator | Friday 09 January 2026 00:58:57 +0000 (0:00:01.610) 0:00:25.003 ******** 2026-01-09 01:01:23.798243 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:01:23.798249 | orchestrator | 2026-01-09 01:01:23.798255 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-09 01:01:23.798261 | orchestrator | Friday 09 January 2026 00:58:58 +0000 (0:00:00.939) 0:00:25.943 ******** 2026-01-09 01:01:23.798267 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.798279 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.798285 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.798291 | orchestrator | 2026-01-09 01:01:23.798297 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-09 01:01:23.798303 | orchestrator | Friday 09 January 2026 00:58:59 +0000 (0:00:00.987) 0:00:26.930 ******** 2026-01-09 01:01:23.798308 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-09 01:01:23.798314 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:01:23.798320 | orchestrator | ok: [testbed-node-2 -> 
localhost] 2026-01-09 01:01:23.798326 | orchestrator | 2026-01-09 01:01:23.798332 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-09 01:01:23.798337 | orchestrator | Friday 09 January 2026 00:59:00 +0000 (0:00:01.134) 0:00:28.064 ******** 2026-01-09 01:01:23.798343 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:01:23.798349 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:01:23.798355 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:01:23.798361 | orchestrator | 2026-01-09 01:01:23.798367 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-09 01:01:23.798373 | orchestrator | Friday 09 January 2026 00:59:00 +0000 (0:00:00.306) 0:00:28.370 ******** 2026-01-09 01:01:23.798379 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-09 01:01:23.798385 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-09 01:01:23.798391 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-09 01:01:23.798397 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-09 01:01:23.798402 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-09 01:01:23.798408 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-09 01:01:23.798414 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-09 01:01:23.798420 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-09 01:01:23.798426 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2026-01-09 01:01:23.798438 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-09 01:01:23.798444 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-09 01:01:23.798449 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-09 01:01:23.798455 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-09 01:01:23.798461 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-09 01:01:23.798467 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-09 01:01:23.798472 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-09 01:01:23.798478 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-09 01:01:23.798484 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-09 01:01:23.798501 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-09 01:01:23.798507 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-09 01:01:23.798513 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-09 01:01:23.798519 | orchestrator | 2026-01-09 01:01:23.798525 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-09 01:01:23.798534 | orchestrator | Friday 09 January 2026 00:59:09 +0000 (0:00:09.072) 0:00:37.443 ******** 2026-01-09 01:01:23.798540 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 
'dest': 'sshd_config'}) 2026-01-09 01:01:23.798546 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-09 01:01:23.798551 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-09 01:01:23.798557 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-09 01:01:23.798563 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-09 01:01:23.798568 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-09 01:01:23.798574 | orchestrator | 2026-01-09 01:01:23.798580 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-09 01:01:23.798585 | orchestrator | Friday 09 January 2026 00:59:12 +0000 (0:00:02.922) 0:00:40.365 ******** 2026-01-09 01:01:23.798592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2026-01-09 01:01:23.798599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.798615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-09 01:01:23.798630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.798641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-09 01:01:23.798651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}}) 2026-01-09 01:01:23.798661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.798671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.798688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-09 01:01:23.798698 | orchestrator | 
2026-01-09 01:01:23.798707 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.798724 | orchestrator | Friday 09 January 2026 00:59:15 +0000 (0:00:02.625) 0:00:42.990 ******** 2026-01-09 01:01:23.798733 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.798743 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.798753 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.798760 | orchestrator | 2026-01-09 01:01:23.798769 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-09 01:01:23.798774 | orchestrator | Friday 09 January 2026 00:59:15 +0000 (0:00:00.313) 0:00:43.304 ******** 2026-01-09 01:01:23.798780 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.798786 | orchestrator | 2026-01-09 01:01:23.798792 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-09 01:01:23.798798 | orchestrator | Friday 09 January 2026 00:59:17 +0000 (0:00:02.401) 0:00:45.706 ******** 2026-01-09 01:01:23.798804 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.798809 | orchestrator | 2026-01-09 01:01:23.798815 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-09 01:01:23.798821 | orchestrator | Friday 09 January 2026 00:59:20 +0000 (0:00:02.476) 0:00:48.183 ******** 2026-01-09 01:01:23.798827 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:01:23.798833 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:01:23.798838 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:01:23.798844 | orchestrator | 2026-01-09 01:01:23.798850 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-09 01:01:23.798856 | orchestrator | Friday 09 January 2026 00:59:21 +0000 (0:00:00.864) 0:00:49.047 ******** 2026-01-09 01:01:23.798862 | orchestrator | ok: [testbed-node-0] 
2026-01-09 01:01:23.798867 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:01:23.798873 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:01:23.798879 | orchestrator | 2026-01-09 01:01:23.798885 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-09 01:01:23.798890 | orchestrator | Friday 09 January 2026 00:59:21 +0000 (0:00:00.550) 0:00:49.597 ******** 2026-01-09 01:01:23.798896 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.798902 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.798908 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.798914 | orchestrator | 2026-01-09 01:01:23.798919 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-09 01:01:23.798944 | orchestrator | Friday 09 January 2026 00:59:22 +0000 (0:00:00.339) 0:00:49.937 ******** 2026-01-09 01:01:23.798950 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.798956 | orchestrator | 2026-01-09 01:01:23.798962 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-09 01:01:23.798968 | orchestrator | Friday 09 January 2026 00:59:38 +0000 (0:00:16.708) 0:01:06.645 ******** 2026-01-09 01:01:23.798974 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.798979 | orchestrator | 2026-01-09 01:01:23.798985 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-09 01:01:23.798991 | orchestrator | Friday 09 January 2026 00:59:51 +0000 (0:00:12.104) 0:01:18.750 ******** 2026-01-09 01:01:23.798997 | orchestrator | 2026-01-09 01:01:23.799003 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-09 01:01:23.799008 | orchestrator | Friday 09 January 2026 00:59:51 +0000 (0:00:00.065) 0:01:18.816 ******** 2026-01-09 01:01:23.799014 | orchestrator | 2026-01-09 01:01:23.799020 | 
orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-09 01:01:23.799026 | orchestrator | Friday 09 January 2026 00:59:51 +0000 (0:00:00.064) 0:01:18.880 ******** 2026-01-09 01:01:23.799031 | orchestrator | 2026-01-09 01:01:23.799037 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-09 01:01:23.799043 | orchestrator | Friday 09 January 2026 00:59:51 +0000 (0:00:00.069) 0:01:18.950 ******** 2026-01-09 01:01:23.799049 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.799055 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:01:23.799065 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:01:23.799070 | orchestrator | 2026-01-09 01:01:23.799076 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-09 01:01:23.799082 | orchestrator | Friday 09 January 2026 01:00:11 +0000 (0:00:20.571) 0:01:39.521 ******** 2026-01-09 01:01:23.799088 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.799094 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:01:23.799099 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:01:23.799105 | orchestrator | 2026-01-09 01:01:23.799111 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-09 01:01:23.799117 | orchestrator | Friday 09 January 2026 01:00:16 +0000 (0:00:04.733) 0:01:44.255 ******** 2026-01-09 01:01:23.799123 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.799128 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:01:23.799134 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:01:23.799140 | orchestrator | 2026-01-09 01:01:23.799146 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.799151 | orchestrator | Friday 09 January 2026 01:00:23 +0000 (0:00:06.631) 0:01:50.887 ******** 
2026-01-09 01:01:23.799157 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:01:23.799163 | orchestrator | 2026-01-09 01:01:23.799173 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-09 01:01:23.799179 | orchestrator | Friday 09 January 2026 01:00:23 +0000 (0:00:00.722) 0:01:51.609 ******** 2026-01-09 01:01:23.799185 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:01:23.799191 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:01:23.799196 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:01:23.799202 | orchestrator | 2026-01-09 01:01:23.799208 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-09 01:01:23.799214 | orchestrator | Friday 09 January 2026 01:00:24 +0000 (0:00:00.824) 0:01:52.433 ******** 2026-01-09 01:01:23.799220 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:01:23.799225 | orchestrator | 2026-01-09 01:01:23.799235 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-09 01:01:23.799244 | orchestrator | Friday 09 January 2026 01:00:26 +0000 (0:00:01.736) 0:01:54.170 ******** 2026-01-09 01:01:23.799252 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-09 01:01:23.799262 | orchestrator | 2026-01-09 01:01:23.799271 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-09 01:01:23.799280 | orchestrator | Friday 09 January 2026 01:00:39 +0000 (0:00:13.468) 0:02:07.639 ******** 2026-01-09 01:01:23.799289 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-09 01:01:23.799298 | orchestrator | 2026-01-09 01:01:23.799312 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-09 01:01:23.799321 | orchestrator | Friday 09 January 2026 
01:01:09 +0000 (0:00:29.466) 0:02:37.106 ******** 2026-01-09 01:01:23.799331 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-09 01:01:23.799341 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-09 01:01:23.799350 | orchestrator | 2026-01-09 01:01:23.799360 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-09 01:01:23.799370 | orchestrator | Friday 09 January 2026 01:01:16 +0000 (0:00:07.534) 0:02:44.641 ******** 2026-01-09 01:01:23.799380 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.799389 | orchestrator | 2026-01-09 01:01:23.799399 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-09 01:01:23.799409 | orchestrator | Friday 09 January 2026 01:01:17 +0000 (0:00:00.121) 0:02:44.762 ******** 2026-01-09 01:01:23.799416 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.799422 | orchestrator | 2026-01-09 01:01:23.799428 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-09 01:01:23.799434 | orchestrator | Friday 09 January 2026 01:01:17 +0000 (0:00:00.124) 0:02:44.887 ******** 2026-01-09 01:01:23.799445 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.799451 | orchestrator | 2026-01-09 01:01:23.799457 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-09 01:01:23.799463 | orchestrator | Friday 09 January 2026 01:01:17 +0000 (0:00:00.132) 0:02:45.020 ******** 2026-01-09 01:01:23.799469 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.799474 | orchestrator | 2026-01-09 01:01:23.799480 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-09 01:01:23.799486 | orchestrator | Friday 09 January 2026 01:01:17 +0000 
(0:00:00.511) 0:02:45.531 ******** 2026-01-09 01:01:23.799492 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:01:23.799498 | orchestrator | 2026-01-09 01:01:23.799503 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-09 01:01:23.799509 | orchestrator | Friday 09 January 2026 01:01:21 +0000 (0:00:03.460) 0:02:48.991 ******** 2026-01-09 01:01:23.799515 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:01:23.799521 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:01:23.799527 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:01:23.799532 | orchestrator | 2026-01-09 01:01:23.799538 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:01:23.799545 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-09 01:01:23.799552 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-09 01:01:23.799558 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-09 01:01:23.799564 | orchestrator | 2026-01-09 01:01:23.799569 | orchestrator | 2026-01-09 01:01:23.799575 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:01:23.799581 | orchestrator | Friday 09 January 2026 01:01:21 +0000 (0:00:00.446) 0:02:49.437 ******** 2026-01-09 01:01:23.799587 | orchestrator | =============================================================================== 2026-01-09 01:01:23.799592 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.47s 2026-01-09 01:01:23.799598 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 20.57s 2026-01-09 01:01:23.799604 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.71s 
2026-01-09 01:01:23.799610 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.47s 2026-01-09 01:01:23.799615 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.10s 2026-01-09 01:01:23.799621 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.07s 2026-01-09 01:01:23.799627 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.53s 2026-01-09 01:01:23.799632 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.63s 2026-01-09 01:01:23.799638 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.62s 2026-01-09 01:01:23.799647 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.73s 2026-01-09 01:01:23.799664 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.58s 2026-01-09 01:01:23.799673 | orchestrator | keystone : Creating default user role ----------------------------------- 3.46s 2026-01-09 01:01:23.799683 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.96s 2026-01-09 01:01:23.799694 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.92s 2026-01-09 01:01:23.799704 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.63s 2026-01-09 01:01:23.799713 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.48s 2026-01-09 01:01:23.799724 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.40s 2026-01-09 01:01:23.799739 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.94s 2026-01-09 01:01:23.799749 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s 2026-01-09 
01:01:23.799759 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.66s 2026-01-09 01:01:23.799774 | orchestrator | 2026-01-09 01:01:23 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:23.799781 | orchestrator | 2026-01-09 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:26.803531 | orchestrator | 2026-01-09 01:01:26 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:26.804113 | orchestrator | 2026-01-09 01:01:26 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:26.805375 | orchestrator | 2026-01-09 01:01:26 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:26.806277 | orchestrator | 2026-01-09 01:01:26 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:26.807273 | orchestrator | 2026-01-09 01:01:26 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:26.807428 | orchestrator | 2026-01-09 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:29.840673 | orchestrator | 2026-01-09 01:01:29 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:29.841823 | orchestrator | 2026-01-09 01:01:29 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:29.842694 | orchestrator | 2026-01-09 01:01:29 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:29.843288 | orchestrator | 2026-01-09 01:01:29 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:29.844242 | orchestrator | 2026-01-09 01:01:29 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:29.844285 | orchestrator | 2026-01-09 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:32.878182 | orchestrator | 2026-01-09 
01:01:32 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:32.878271 | orchestrator | 2026-01-09 01:01:32 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:32.880275 | orchestrator | 2026-01-09 01:01:32 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:32.882513 | orchestrator | 2026-01-09 01:01:32 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:32.883839 | orchestrator | 2026-01-09 01:01:32 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:32.883900 | orchestrator | 2026-01-09 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:35.934287 | orchestrator | 2026-01-09 01:01:35 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:35.939385 | orchestrator | 2026-01-09 01:01:35 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:35.942576 | orchestrator | 2026-01-09 01:01:35 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:35.943555 | orchestrator | 2026-01-09 01:01:35 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:35.944633 | orchestrator | 2026-01-09 01:01:35 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:35.944810 | orchestrator | 2026-01-09 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:38.986839 | orchestrator | 2026-01-09 01:01:38 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:38.990677 | orchestrator | 2026-01-09 01:01:38 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:38.992201 | orchestrator | 2026-01-09 01:01:38 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:38.996901 | orchestrator | 2026-01-09 
01:01:38 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:38.997647 | orchestrator | 2026-01-09 01:01:38 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:38.997716 | orchestrator | 2026-01-09 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:42.047018 | orchestrator | 2026-01-09 01:01:42 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:42.049461 | orchestrator | 2026-01-09 01:01:42 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:42.051999 | orchestrator | 2026-01-09 01:01:42 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:42.054913 | orchestrator | 2026-01-09 01:01:42 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:42.058678 | orchestrator | 2026-01-09 01:01:42 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:42.058754 | orchestrator | 2026-01-09 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:45.101317 | orchestrator | 2026-01-09 01:01:45 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:45.102351 | orchestrator | 2026-01-09 01:01:45 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:45.103429 | orchestrator | 2026-01-09 01:01:45 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:45.104669 | orchestrator | 2026-01-09 01:01:45 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:45.105755 | orchestrator | 2026-01-09 01:01:45 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:45.106079 | orchestrator | 2026-01-09 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:48.157081 | orchestrator | 2026-01-09 01:01:48 | INFO  | Task 
d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:48.160492 | orchestrator | 2026-01-09 01:01:48 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:48.163172 | orchestrator | 2026-01-09 01:01:48 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:48.165665 | orchestrator | 2026-01-09 01:01:48 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:48.167225 | orchestrator | 2026-01-09 01:01:48 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:48.167256 | orchestrator | 2026-01-09 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:51.219147 | orchestrator | 2026-01-09 01:01:51 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:51.220686 | orchestrator | 2026-01-09 01:01:51 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:51.222941 | orchestrator | 2026-01-09 01:01:51 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:51.224202 | orchestrator | 2026-01-09 01:01:51 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:51.225489 | orchestrator | 2026-01-09 01:01:51 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:51.225523 | orchestrator | 2026-01-09 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:54.268301 | orchestrator | 2026-01-09 01:01:54 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:54.269048 | orchestrator | 2026-01-09 01:01:54 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:54.271281 | orchestrator | 2026-01-09 01:01:54 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state STARTED 2026-01-09 01:01:54.274282 | orchestrator | 2026-01-09 01:01:54 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:54.275427 | orchestrator | 2026-01-09 01:01:54 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:54.275466 | orchestrator | 2026-01-09 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:01:57.320980 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:01:57.323120 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:01:57.324106 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task b8fff908-5db1-4d19-9c94-f24f746432d6 is in state SUCCESS 2026-01-09 01:01:57.327117 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:01:57.328114 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:01:57.329183 | orchestrator | 2026-01-09 01:01:57 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:01:57.329229 | orchestrator | 2026-01-09 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:00.368697 | orchestrator | 2026-01-09 01:02:00 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:00.370667 | orchestrator | 2026-01-09 01:02:00 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:00.371854 | orchestrator | 2026-01-09 01:02:00 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:00.374784 | orchestrator | 2026-01-09 01:02:00 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:00.376535 | orchestrator | 2026-01-09 01:02:00 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:00.376588 | orchestrator | 2026-01-09 01:02:00 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 01:02:03.421764 | orchestrator | 2026-01-09 01:02:03 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:03.422757 | orchestrator | 2026-01-09 01:02:03 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:03.424588 | orchestrator | 2026-01-09 01:02:03 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:03.427584 | orchestrator | 2026-01-09 01:02:03 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:03.428642 | orchestrator | 2026-01-09 01:02:03 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:03.428686 | orchestrator | 2026-01-09 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:06.468943 | orchestrator | 2026-01-09 01:02:06 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:06.469255 | orchestrator | 2026-01-09 01:02:06 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:06.470650 | orchestrator | 2026-01-09 01:02:06 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:06.472514 | orchestrator | 2026-01-09 01:02:06 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:06.475481 | orchestrator | 2026-01-09 01:02:06 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:06.475540 | orchestrator | 2026-01-09 01:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:09.514908 | orchestrator | 2026-01-09 01:02:09 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:09.517818 | orchestrator | 2026-01-09 01:02:09 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:09.520474 | orchestrator | 2026-01-09 01:02:09 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:09.521813 | orchestrator | 2026-01-09 01:02:09 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:09.522258 | orchestrator | 2026-01-09 01:02:09 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:09.522481 | orchestrator | 2026-01-09 01:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:12.567656 | orchestrator | 2026-01-09 01:02:12 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:12.571449 | orchestrator | 2026-01-09 01:02:12 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:12.572619 | orchestrator | 2026-01-09 01:02:12 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:12.573741 | orchestrator | 2026-01-09 01:02:12 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:12.577309 | orchestrator | 2026-01-09 01:02:12 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:12.577364 | orchestrator | 2026-01-09 01:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:15.601888 | orchestrator | 2026-01-09 01:02:15 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:15.602296 | orchestrator | 2026-01-09 01:02:15 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:15.603013 | orchestrator | 2026-01-09 01:02:15 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:15.603787 | orchestrator | 2026-01-09 01:02:15 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:15.604446 | orchestrator | 2026-01-09 01:02:15 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:15.604642 | orchestrator | 2026-01-09 01:02:15 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 01:02:18.637512 | orchestrator | 2026-01-09 01:02:18 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:18.639319 | orchestrator | 2026-01-09 01:02:18 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:18.641817 | orchestrator | 2026-01-09 01:02:18 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:18.643881 | orchestrator | 2026-01-09 01:02:18 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:18.645512 | orchestrator | 2026-01-09 01:02:18 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:18.645819 | orchestrator | 2026-01-09 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:21.682420 | orchestrator | 2026-01-09 01:02:21 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:21.683751 | orchestrator | 2026-01-09 01:02:21 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:21.685467 | orchestrator | 2026-01-09 01:02:21 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:21.686565 | orchestrator | 2026-01-09 01:02:21 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:21.688101 | orchestrator | 2026-01-09 01:02:21 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:21.688176 | orchestrator | 2026-01-09 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:24.723908 | orchestrator | 2026-01-09 01:02:24 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:24.724567 | orchestrator | 2026-01-09 01:02:24 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:24.725392 | orchestrator | 2026-01-09 01:02:24 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:24.727503 | orchestrator | 2026-01-09 01:02:24 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:24.727851 | orchestrator | 2026-01-09 01:02:24 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:24.728027 | orchestrator | 2026-01-09 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:27.762626 | orchestrator | 2026-01-09 01:02:27 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:27.762895 | orchestrator | 2026-01-09 01:02:27 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:27.764009 | orchestrator | 2026-01-09 01:02:27 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:27.764400 | orchestrator | 2026-01-09 01:02:27 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:27.765241 | orchestrator | 2026-01-09 01:02:27 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:27.765290 | orchestrator | 2026-01-09 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:30.796433 | orchestrator | 2026-01-09 01:02:30 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:30.796756 | orchestrator | 2026-01-09 01:02:30 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:30.797564 | orchestrator | 2026-01-09 01:02:30 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:30.798104 | orchestrator | 2026-01-09 01:02:30 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:30.799171 | orchestrator | 2026-01-09 01:02:30 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:30.799212 | orchestrator | 2026-01-09 01:02:30 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 01:02:33.827683 | orchestrator | 2026-01-09 01:02:33 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:33.828006 | orchestrator | 2026-01-09 01:02:33 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:33.828739 | orchestrator | 2026-01-09 01:02:33 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:33.829490 | orchestrator | 2026-01-09 01:02:33 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:33.830138 | orchestrator | 2026-01-09 01:02:33 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:33.830157 | orchestrator | 2026-01-09 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:36.852994 | orchestrator | 2026-01-09 01:02:36 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:36.855376 | orchestrator | 2026-01-09 01:02:36 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:36.855677 | orchestrator | 2026-01-09 01:02:36 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:36.856312 | orchestrator | 2026-01-09 01:02:36 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:36.856935 | orchestrator | 2026-01-09 01:02:36 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:36.857122 | orchestrator | 2026-01-09 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:39.879210 | orchestrator | 2026-01-09 01:02:39 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:39.879415 | orchestrator | 2026-01-09 01:02:39 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:39.879980 | orchestrator | 2026-01-09 01:02:39 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:39.880512 | orchestrator | 2026-01-09 01:02:39 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:39.881354 | orchestrator | 2026-01-09 01:02:39 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:39.881390 | orchestrator | 2026-01-09 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:42.917871 | orchestrator | 2026-01-09 01:02:42 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:42.918127 | orchestrator | 2026-01-09 01:02:42 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:42.919931 | orchestrator | 2026-01-09 01:02:42 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:42.920900 | orchestrator | 2026-01-09 01:02:42 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:42.923501 | orchestrator | 2026-01-09 01:02:42 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:42.923578 | orchestrator | 2026-01-09 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:45.961506 | orchestrator | 2026-01-09 01:02:45 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:45.961752 | orchestrator | 2026-01-09 01:02:45 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state STARTED 2026-01-09 01:02:45.963505 | orchestrator | 2026-01-09 01:02:45 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:45.964269 | orchestrator | 2026-01-09 01:02:45 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:45.965504 | orchestrator | 2026-01-09 01:02:45 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:45.965567 | orchestrator | 2026-01-09 01:02:45 | INFO  | Wait 1 
second(s) until the next check 2026-01-09 01:02:49.002946 | orchestrator | 2026-01-09 01:02:49 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:49.003058 | orchestrator | 2026-01-09 01:02:49 | INFO  | Task b9354468-c8e8-4fb2-8c6b-a271eddf9d12 is in state SUCCESS 2026-01-09 01:02:49.003465 | orchestrator | 2026-01-09 01:02:49 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:49.003651 | orchestrator | 2026-01-09 01:02:49 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:49.006593 | orchestrator | 2026-01-09 01:02:49 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:49.006634 | orchestrator | 2026-01-09 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:52.028698 | orchestrator | 2026-01-09 01:02:52 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:52.028813 | orchestrator | 2026-01-09 01:02:52 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in state STARTED 2026-01-09 01:02:52.029534 | orchestrator | 2026-01-09 01:02:52 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:02:52.030295 | orchestrator | 2026-01-09 01:02:52 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED 2026-01-09 01:02:52.030772 | orchestrator | 2026-01-09 01:02:52 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED 2026-01-09 01:02:52.030795 | orchestrator | 2026-01-09 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:02:55.078504 | orchestrator | 2026-01-09 01:02:55 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:02:55.079246 | orchestrator | 2026-01-09 01:02:55 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in state STARTED 2026-01-09 01:02:55.080218 | orchestrator | 2026-01-09 01:02:55 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:02:55.081272 | orchestrator | 2026-01-09 01:02:55 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state STARTED
2026-01-09 01:02:55.082339 | orchestrator | 2026-01-09 01:02:55 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED
2026-01-09 01:02:55.082365 | orchestrator | 2026-01-09 01:02:55 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:03:40.630728 | orchestrator | 2026-01-09 01:03:40 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED
2026-01-09 01:03:40.631229 | orchestrator | 2026-01-09 01:03:40 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in state STARTED
2026-01-09 01:03:40.631996 | orchestrator | 2026-01-09 01:03:40 | INFO  | Task
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:03:40.632783 | orchestrator | 2026-01-09 01:03:40 | INFO  | Task 15c0e14d-fe08-42dd-9d34-c9dbdfc98cee is in state SUCCESS
2026-01-09 01:03:40.633842 | orchestrator |
2026-01-09 01:03:40.633873 | orchestrator |
2026-01-09 01:03:40.633882 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-09 01:03:40.633887 | orchestrator |
2026-01-09 01:03:40.633892 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-09 01:03:40.633897 | orchestrator | Friday 09 January 2026 01:00:59 +0000 (0:00:00.244) 0:00:00.244 ********
2026-01-09 01:03:40.633901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-09 01:03:40.633907 | orchestrator |
2026-01-09 01:03:40.633911 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-09 01:03:40.633916 | orchestrator | Friday 09 January 2026 01:00:59 +0000 (0:00:00.221) 0:00:00.465 ********
2026-01-09 01:03:40.633921 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-09 01:03:40.633926 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-09 01:03:40.633930 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-09 01:03:40.633934 | orchestrator |
2026-01-09 01:03:40.633938 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-09 01:03:40.633961 | orchestrator | Friday 09 January 2026 01:01:00 +0000 (0:00:01.288) 0:00:01.754 ********
2026-01-09 01:03:40.633965 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-09 01:03:40.633969 | orchestrator |
2026-01-09 01:03:40.633975 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-09 01:03:40.633981 | orchestrator | Friday 09 January 2026 01:01:02 +0000 (0:00:01.574) 0:00:03.328 ********
2026-01-09 01:03:40.633988 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.633994 | orchestrator |
2026-01-09 01:03:40.633999 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-09 01:03:40.634005 | orchestrator | Friday 09 January 2026 01:01:03 +0000 (0:00:00.978) 0:00:04.307 ********
2026-01-09 01:03:40.634062 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634071 | orchestrator |
2026-01-09 01:03:40.634077 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-09 01:03:40.634084 | orchestrator | Friday 09 January 2026 01:01:04 +0000 (0:00:00.955) 0:00:05.262 ********
2026-01-09 01:03:40.634090 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-09 01:03:40.634095 | orchestrator | ok: [testbed-manager]
2026-01-09 01:03:40.634101 | orchestrator |
2026-01-09 01:03:40.634107 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-09 01:03:40.634114 | orchestrator | Friday 09 January 2026 01:01:45 +0000 (0:00:41.096) 0:00:46.359 ********
2026-01-09 01:03:40.634120 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-09 01:03:40.634127 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-09 01:03:40.634133 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-09 01:03:40.634140 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-09 01:03:40.634160 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-09 01:03:40.634165 | orchestrator |
2026-01-09 01:03:40.634169 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-09 01:03:40.634172 | orchestrator | Friday 09 January 2026 01:01:49 +0000 (0:00:04.329) 0:00:50.689 ********
2026-01-09 01:03:40.634176 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-09 01:03:40.634180 | orchestrator |
2026-01-09 01:03:40.634184 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-09 01:03:40.634188 | orchestrator | Friday 09 January 2026 01:01:50 +0000 (0:00:00.497) 0:00:51.186 ********
2026-01-09 01:03:40.634192 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:03:40.634196 | orchestrator |
2026-01-09 01:03:40.634200 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-09 01:03:40.634204 | orchestrator | Friday 09 January 2026 01:01:50 +0000 (0:00:00.161) 0:00:51.348 ********
2026-01-09 01:03:40.634207 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:03:40.634211 | orchestrator |
2026-01-09 01:03:40.634215 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-09 01:03:40.634219 | orchestrator | Friday 09 January 2026 01:01:50 +0000 (0:00:00.520) 0:00:51.868 ********
2026-01-09 01:03:40.634222 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634226 | orchestrator |
2026-01-09 01:03:40.634230 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-09 01:03:40.634234 | orchestrator | Friday 09 January 2026 01:01:52 +0000 (0:00:01.428) 0:00:53.297 ********
2026-01-09 01:03:40.634238 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634242 | orchestrator |
2026-01-09 01:03:40.634245 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-09 01:03:40.634249 | orchestrator | Friday 09 January 2026 01:01:52 +0000 (0:00:00.620) 0:00:54.069 ********
2026-01-09 01:03:40.634253 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634257 | orchestrator |
2026-01-09 01:03:40.634261 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-09 01:03:40.634272 | orchestrator | Friday 09 January 2026 01:01:53 +0000 (0:00:00.620) 0:00:54.689 ********
2026-01-09 01:03:40.634276 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-09 01:03:40.634280 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-09 01:03:40.634314 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-09 01:03:40.634320 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-09 01:03:40.634326 | orchestrator |
2026-01-09 01:03:40.634333 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:03:40.634340 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-09 01:03:40.634349 | orchestrator |
2026-01-09 01:03:40.634353 | orchestrator |
2026-01-09 01:03:40.634367 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:03:40.634371 | orchestrator | Friday 09 January 2026 01:01:55 +0000 (0:00:01.508) 0:00:56.198 ********
2026-01-09 01:03:40.634375 | orchestrator | ===============================================================================
2026-01-09 01:03:40.634379 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.10s
2026-01-09 01:03:40.634383 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s
2026-01-09 01:03:40.634387 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.57s
2026-01-09 01:03:40.634391 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.51s
2026-01-09 01:03:40.634394 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.43s
2026-01-09 01:03:40.634398 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s
2026-01-09 01:03:40.634402 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s
2026-01-09 01:03:40.634406 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2026-01-09 01:03:40.634410 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s
2026-01-09 01:03:40.634413 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2026-01-09 01:03:40.634417 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s
2026-01-09 01:03:40.634421 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2026-01-09 01:03:40.634425 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-01-09 01:03:40.634428 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s
2026-01-09 01:03:40.634432 | orchestrator |
2026-01-09 01:03:40.634436 | orchestrator |
2026-01-09 01:03:40.634440 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-09 01:03:40.634444 | orchestrator |
2026-01-09 01:03:40.634451 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-09 01:03:40.634456 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.086) 0:00:00.086 ********
2026-01-09 01:03:40.634461 | orchestrator | changed: [localhost]
2026-01-09 01:03:40.634466 | orchestrator |
2026-01-09 01:03:40.634471 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-09 01:03:40.634475 | orchestrator | Friday 09 January 2026 01:01:29 +0000 (0:00:00.730) 0:00:00.817 ********
2026-01-09 01:03:40.634480 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-01-09 01:03:40.634485 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-01-09 01:03:40.634489 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-01-09 01:03:40.634514 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs"}
2026-01-09 01:03:40.634533 | orchestrator |
2026-01-09 01:03:40.634538 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:03:40.634543 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-09 01:03:40.634547 | orchestrator |
2026-01-09 01:03:40.634553 | orchestrator |
2026-01-09 01:03:40.634559 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:03:40.634565 | orchestrator | Friday 09 January 2026 01:02:48 +0000 (0:01:19.180) 0:01:19.998 ********
2026-01-09 01:03:40.634571 | orchestrator | ===============================================================================
2026-01-09 01:03:40.634578 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 79.18s
2026-01-09 01:03:40.634584 | orchestrator | Ensure the destination directory exists --------------------------------- 0.73s
2026-01-09 01:03:40.634590 | orchestrator |
2026-01-09 01:03:40.634596 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-09 01:03:40.634602 | orchestrator | 2.16.14
2026-01-09 01:03:40.634608 | orchestrator |
2026-01-09 01:03:40.634614 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-09 01:03:40.634619 | orchestrator |
2026-01-09 01:03:40.634626 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-09 01:03:40.634632 | orchestrator | Friday 09 January 2026 01:01:59 +0000 (0:00:00.277) 0:00:00.277 ********
2026-01-09 01:03:40.634638 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634645 | orchestrator |
2026-01-09 01:03:40.634651 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-09 01:03:40.634658 | orchestrator | Friday 09 January 2026 01:02:01 +0000 (0:00:01.597) 0:00:01.874 ********
2026-01-09 01:03:40.634666 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634672 | orchestrator |
2026-01-09 01:03:40.634679 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-09 01:03:40.634685 | orchestrator | Friday 09 January 2026 01:02:02 +0000 (0:00:01.059) 0:00:02.933 ********
2026-01-09 01:03:40.634692 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634699 | orchestrator |
2026-01-09 01:03:40.634706 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-09 01:03:40.634712 | orchestrator | Friday 09 January 2026 01:02:03 +0000 (0:00:01.076) 0:00:04.009 ********
2026-01-09 01:03:40.634719 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634726 | orchestrator |
2026-01-09 01:03:40.634731 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-09 01:03:40.634741 | orchestrator | Friday 09 January 2026 01:02:04 +0000 (0:00:01.262) 0:00:05.271 ********
2026-01-09 01:03:40.634746 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634750 | orchestrator |
2026-01-09 01:03:40.634755 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-09 01:03:40.634759 | orchestrator | Friday 09 January 2026 01:02:06 +0000 (0:00:01.208) 0:00:06.480 ********
2026-01-09 01:03:40.634764 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634768 | orchestrator |
2026-01-09 01:03:40.634773 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-09 01:03:40.634778 | orchestrator | Friday 09 January 2026 01:02:07 +0000 (0:00:01.235) 0:00:07.716 ********
2026-01-09 01:03:40.634783 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634787 | orchestrator |
2026-01-09 01:03:40.634791 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-09 01:03:40.634796 | orchestrator | Friday 09 January 2026 01:02:09 +0000 (0:00:02.058) 0:00:09.774 ********
2026-01-09 01:03:40.634801 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634805 | orchestrator |
2026-01-09 01:03:40.634810 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-09 01:03:40.634814 | orchestrator | Friday 09 January 2026 01:02:10 +0000 (0:00:01.410) 0:00:11.185 ********
2026-01-09 01:03:40.634824 | orchestrator | changed: [testbed-manager]
2026-01-09 01:03:40.634828 | orchestrator |
2026-01-09 01:03:40.634833 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-09 01:03:40.634837 | orchestrator | Friday 09 January 2026 01:03:13 +0000 (0:01:02.505) 0:01:13.691 ********
2026-01-09 01:03:40.634841 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:03:40.634845 | orchestrator |
2026-01-09 01:03:40.634848 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-09 01:03:40.634852 | orchestrator |
2026-01-09 01:03:40.634856 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-09 01:03:40.634860 | orchestrator | Friday 09 January 2026 01:03:13 +0000 (0:00:00.140) 0:01:13.831 ********
2026-01-09 01:03:40.634864 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:03:40.634868 | orchestrator |
2026-01-09 01:03:40.634876 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-09 01:03:40.634880 | orchestrator |
2026-01-09 01:03:40.634883 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-09 01:03:40.634887 | orchestrator | Friday 09 January 2026 01:03:15 +0000 (0:00:01.979) 0:01:15.811 ********
2026-01-09 01:03:40.634891 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:03:40.634895 | orchestrator |
2026-01-09 01:03:40.634899 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-09 01:03:40.634903 | orchestrator |
2026-01-09 01:03:40.634907 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-09 01:03:40.634911 | orchestrator | Friday 09 January 2026 01:03:26 +0000 (0:00:11.135) 0:01:26.947 ********
2026-01-09 01:03:40.634915 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:03:40.634918 | orchestrator |
2026-01-09 01:03:40.634922 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:03:40.634926 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-09 01:03:40.634931 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:03:40.634935 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:03:40.634939 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:03:40.634943 | orchestrator |
2026-01-09 01:03:40.634947 | orchestrator |
2026-01-09 01:03:40.634950 | orchestrator |
2026-01-09 01:03:40.634954 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:03:40.634958 | orchestrator | Friday 09 January 2026 01:03:37 +0000 (0:00:11.100) 0:01:38.047 ********
2026-01-09 01:03:40.634962 | orchestrator | ===============================================================================
2026-01-09 01:03:40.634966 | orchestrator | Create admin user ------------------------------------------------------ 62.51s
2026-01-09 01:03:40.634970 | orchestrator | Restart ceph manager service ------------------------------------------- 24.22s
2026-01-09 01:03:40.634974 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s
2026-01-09 01:03:40.634978 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.60s
2026-01-09 01:03:40.634981 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.41s
2026-01-09 01:03:40.634985 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s
2026-01-09 01:03:40.634989 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.24s
2026-01-09 01:03:40.634993 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.21s
2026-01-09 01:03:40.634997 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s
2026-01-09 01:03:40.635001 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s
2026-01-09 01:03:40.635008 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2026-01-09 01:03:40.635071 | orchestrator | 2026-01-09 01:03:40 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state STARTED
2026-01-09 01:03:40.635077 | orchestrator | 2026-01-09 01:03:40 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:03:43.667970 | orchestrator | 2026-01-09 01:03:43 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED
2026-01-09 01:03:43.668578 | orchestrator | 2026-01-09 01:03:43 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:03:43.670004 | orchestrator | 2026-01-09 01:03:43 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in
state STARTED
2026-01-09 01:03:43.670128 | orchestrator | 2026-01-09 01:03:43 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:03:43.671536 | orchestrator | 2026-01-09 01:03:43 | INFO  | Task 02bde24d-9236-4fd3-877b-10324ddf2cf5 is in state SUCCESS
2026-01-09 01:03:43.671624 | orchestrator | 2026-01-09 01:03:43 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:03:43.672940 | orchestrator |
2026-01-09 01:03:43.672972 | orchestrator |
2026-01-09 01:03:43.672978 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 01:03:43.672984 | orchestrator |
2026-01-09 01:03:43.672989 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 01:03:43.672994 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.245) 0:00:00.245 ********
2026-01-09 01:03:43.672999 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:03:43.673005 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:03:43.673009 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:03:43.673013 | orchestrator |
2026-01-09 01:03:43.673018 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 01:03:43.673022 | orchestrator | Friday 09 January 2026 01:01:29 +0000 (0:00:00.316) 0:00:00.562 ********
2026-01-09 01:03:43.673027 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-09 01:03:43.673032 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-09 01:03:43.673036 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-09 01:03:43.673040 | orchestrator |
2026-01-09 01:03:43.673044 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-09 01:03:43.673048 | orchestrator |
2026-01-09 01:03:43.673071 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-09 01:03:43.673076 | orchestrator | Friday 09 January 2026 01:01:29 +0000 (0:00:00.332) 0:00:00.895 ********
2026-01-09 01:03:43.673080 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:03:43.673086 | orchestrator |
2026-01-09 01:03:43.673090 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-09 01:03:43.673094 | orchestrator | Friday 09 January 2026 01:01:29 +0000 (0:00:00.384) 0:00:01.280 ********
2026-01-09 01:03:43.673099 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-09 01:03:43.673103 | orchestrator |
2026-01-09 01:03:43.673107 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-09 01:03:43.673111 | orchestrator | Friday 09 January 2026 01:01:33 +0000 (0:00:03.899) 0:00:05.179 ********
2026-01-09 01:03:43.673115 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-09 01:03:43.673119 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-09 01:03:43.673123 | orchestrator |
2026-01-09 01:03:43.673127 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-09 01:03:43.673131 | orchestrator | Friday 09 January 2026 01:01:41 +0000 (0:00:07.702) 0:00:12.882 ********
2026-01-09 01:03:43.673191 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-09 01:03:43.673197 | orchestrator |
2026-01-09 01:03:43.673204 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-09 01:03:43.673210 | orchestrator | Friday 09 January 2026 01:01:45 +0000 (0:00:03.909) 0:00:16.791 ********
2026-01-09 01:03:43.673216 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-09 01:03:43.673224 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-09 01:03:43.673233 | orchestrator |
2026-01-09 01:03:43.673241 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-09 01:03:43.673246 | orchestrator | Friday 09 January 2026 01:01:49 +0000 (0:00:04.340) 0:00:21.132 ********
2026-01-09 01:03:43.673252 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-09 01:03:43.673259 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-09 01:03:43.673266 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-09 01:03:43.673272 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-09 01:03:43.673278 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-09 01:03:43.673283 | orchestrator |
2026-01-09 01:03:43.673289 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-01-09 01:03:43.673294 | orchestrator | Friday 09 January 2026 01:02:08 +0000 (0:00:18.703) 0:00:39.836 ********
2026-01-09 01:03:43.673299 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-09 01:03:43.673305 | orchestrator |
2026-01-09 01:03:43.673312 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-09 01:03:43.673317 | orchestrator | Friday 09 January 2026 01:02:12 +0000 (0:00:04.303) 0:00:44.139 ********
2026-01-09 01:03:43.673326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-09 01:03:43.673348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-09 01:03:43.673362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.673384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673553 | orchestrator | 2026-01-09 01:03:43.673560 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-09 01:03:43.673568 | orchestrator | Friday 09 January 2026 01:02:15 +0000 (0:00:02.470) 0:00:46.610 ******** 2026-01-09 01:03:43.673573 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-09 01:03:43.673580 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-09 01:03:43.673585 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-09 01:03:43.673591 | orchestrator | 2026-01-09 01:03:43.673596 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-09 01:03:43.673604 | orchestrator | Friday 09 January 2026 01:02:16 +0000 (0:00:01.304) 0:00:47.914 ******** 2026-01-09 01:03:43.673628 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.673636 | orchestrator | 2026-01-09 01:03:43.673642 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-09 01:03:43.673648 | orchestrator | Friday 09 January 2026 01:02:16 +0000 (0:00:00.126) 0:00:48.041 ******** 2026-01-09 01:03:43.673654 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.673660 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.673665 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:03:43.673671 | orchestrator | 2026-01-09 01:03:43.673677 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-09 01:03:43.673682 | 
orchestrator | Friday 09 January 2026 01:02:17 +0000 (0:00:00.408) 0:00:48.449 ******** 2026-01-09 01:03:43.673689 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:03:43.673697 | orchestrator | 2026-01-09 01:03:43.673703 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-09 01:03:43.673709 | orchestrator | Friday 09 January 2026 01:02:17 +0000 (0:00:00.514) 0:00:48.963 ******** 2026-01-09 01:03:43.673717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.673733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.673752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.673759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.673823 | orchestrator | 2026-01-09 01:03:43.673829 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-09 01:03:43.673836 | orchestrator | Friday 09 January 2026 01:02:21 +0000 (0:00:03.734) 0:00:52.698 ******** 2026-01-09 01:03:43.673843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.673850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673866 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.673878 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.673892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673911 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:03:43.673917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.673923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673931 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673937 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.673943 | orchestrator | 2026-01-09 01:03:43.673948 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-09 01:03:43.673958 | orchestrator | Friday 09 January 2026 01:02:22 +0000 (0:00:01.561) 0:00:54.260 ******** 2026-01-09 01:03:43.673971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.673982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.673995 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.674001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.674006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.674058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.674068 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.674090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.674099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.674105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.674111 | orchestrator | skipping: [testbed-node-2] 2026-01-09 
01:03:43.674118 | orchestrator | 2026-01-09 01:03:43.674124 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-09 01:03:43.674130 | orchestrator | Friday 09 January 2026 01:02:24 +0000 (0:00:02.022) 0:00:56.282 ******** 2026-01-09 01:03:43.674135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.674594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.674727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.674755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.674872 | orchestrator | 2026-01-09 01:03:43.674885 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-09 01:03:43.674897 | orchestrator | Friday 09 January 2026 01:02:29 +0000 (0:00:04.107) 0:01:00.390 ******** 2026-01-09 01:03:43.674908 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:03:43.674926 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.674936 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:03:43.674947 | orchestrator | 2026-01-09 01:03:43.674960 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-09 01:03:43.674980 | orchestrator | Friday 09 January 2026 01:02:31 +0000 (0:00:02.391) 0:01:02.782 ******** 2026-01-09 01:03:43.674999 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:03:43.675017 | orchestrator | 2026-01-09 
01:03:43.675035 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-09 01:03:43.675053 | orchestrator | Friday 09 January 2026 01:02:32 +0000 (0:00:01.217) 0:01:04.000 ******** 2026-01-09 01:03:43.675073 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.675093 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.675111 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:03:43.675124 | orchestrator | 2026-01-09 01:03:43.675138 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-09 01:03:43.675177 | orchestrator | Friday 09 January 2026 01:02:33 +0000 (0:00:00.734) 0:01:04.734 ******** 2026-01-09 01:03:43.675193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675420 | orchestrator | 2026-01-09 01:03:43.675438 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-09 01:03:43.675466 | orchestrator | Friday 09 January 2026 01:02:43 +0000 (0:00:10.137) 0:01:14.872 ******** 2026-01-09 01:03:43.675494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.675513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675565 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.675584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.675604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675658 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.675687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-09 01:03:43.675700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:03:43.675734 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:03:43.675745 | orchestrator | 2026-01-09 01:03:43.675757 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-09 01:03:43.675768 | orchestrator | Friday 09 January 2026 01:02:45 +0000 (0:00:01.646) 0:01:16.519 ******** 2026-01-09 01:03:43.675780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-09 01:03:43.675831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:03:43.675914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-01-09 01:03:43.675926 | orchestrator | 2026-01-09 01:03:43.675938 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-09 01:03:43.675948 | orchestrator | Friday 09 January 2026 01:02:49 +0000 (0:00:04.573) 0:01:21.092 ******** 2026-01-09 01:03:43.675960 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:03:43.675971 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:03:43.676009 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:03:43.676033 | orchestrator | 2026-01-09 01:03:43.676044 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-09 01:03:43.676055 | orchestrator | Friday 09 January 2026 01:02:50 +0000 (0:00:00.281) 0:01:21.374 ******** 2026-01-09 01:03:43.676067 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676078 | orchestrator | 2026-01-09 01:03:43.676090 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-09 01:03:43.676101 | orchestrator | Friday 09 January 2026 01:02:52 +0000 (0:00:02.240) 0:01:23.614 ******** 2026-01-09 01:03:43.676112 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676146 | orchestrator | 2026-01-09 01:03:43.676247 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-09 01:03:43.676260 | orchestrator | Friday 09 January 2026 01:02:54 +0000 (0:00:02.613) 0:01:26.227 ******** 2026-01-09 01:03:43.676271 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676282 | orchestrator | 2026-01-09 01:03:43.676294 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-09 01:03:43.676305 | orchestrator | Friday 09 January 2026 01:03:07 +0000 (0:00:12.583) 0:01:38.811 ******** 2026-01-09 01:03:43.676316 | orchestrator | 2026-01-09 01:03:43.676327 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2026-01-09 01:03:43.676338 | orchestrator | Friday 09 January 2026 01:03:07 +0000 (0:00:00.173) 0:01:38.984 ******** 2026-01-09 01:03:43.676349 | orchestrator | 2026-01-09 01:03:43.676360 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-09 01:03:43.676371 | orchestrator | Friday 09 January 2026 01:03:07 +0000 (0:00:00.200) 0:01:39.185 ******** 2026-01-09 01:03:43.676382 | orchestrator | 2026-01-09 01:03:43.676393 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-09 01:03:43.676404 | orchestrator | Friday 09 January 2026 01:03:07 +0000 (0:00:00.153) 0:01:39.338 ******** 2026-01-09 01:03:43.676415 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676426 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:03:43.676437 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:03:43.676448 | orchestrator | 2026-01-09 01:03:43.676459 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-09 01:03:43.676470 | orchestrator | Friday 09 January 2026 01:03:20 +0000 (0:00:12.376) 0:01:51.715 ******** 2026-01-09 01:03:43.676481 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676492 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:03:43.676503 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:03:43.676514 | orchestrator | 2026-01-09 01:03:43.676525 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-09 01:03:43.676536 | orchestrator | Friday 09 January 2026 01:03:30 +0000 (0:00:09.967) 0:02:01.682 ******** 2026-01-09 01:03:43.676547 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:03:43.676559 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:03:43.676570 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:03:43.676581 | orchestrator | 2026-01-09 
01:03:43.676593 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:03:43.676606 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-09 01:03:43.676618 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-09 01:03:43.676630 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-09 01:03:43.676641 | orchestrator | 2026-01-09 01:03:43.676660 | orchestrator | 2026-01-09 01:03:43.676689 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:03:43.676712 | orchestrator | Friday 09 January 2026 01:03:42 +0000 (0:00:11.834) 0:02:13.517 ******** 2026-01-09 01:03:43.676756 | orchestrator | =============================================================================== 2026-01-09 01:03:43.676778 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.70s 2026-01-09 01:03:43.676815 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.58s 2026-01-09 01:03:43.676835 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.38s 2026-01-09 01:03:43.676856 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.83s 2026-01-09 01:03:43.676867 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.14s 2026-01-09 01:03:43.676878 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.97s 2026-01-09 01:03:43.676890 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.70s 2026-01-09 01:03:43.676903 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.57s 2026-01-09 01:03:43.676914 | 
orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.34s 2026-01-09 01:03:43.676925 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.30s 2026-01-09 01:03:43.676937 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.11s 2026-01-09 01:03:43.676948 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.91s 2026-01-09 01:03:43.676969 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.90s 2026-01-09 01:03:43.676981 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.73s 2026-01-09 01:03:43.676993 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.61s 2026-01-09 01:03:43.677004 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.47s 2026-01-09 01:03:43.677015 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.39s 2026-01-09 01:03:43.677027 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.24s 2026-01-09 01:03:43.677039 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.02s 2026-01-09 01:03:43.677050 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.65s 2026-01-09 01:03:46.701194 | orchestrator | 2026-01-09 01:03:46 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:03:46.701396 | orchestrator | 2026-01-09 01:03:46 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:03:46.702467 | orchestrator | 2026-01-09 01:03:46 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in state STARTED 2026-01-09 01:03:46.703043 | orchestrator | 2026-01-09 01:03:46 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state 
STARTED 2026-01-09 01:03:46.703209 | orchestrator | 2026-01-09 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:20.192315 | orchestrator | 2026-01-09 01:04:20 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:04:20.192405 | orchestrator | 2026-01-09 01:04:20 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:04:20.217103 | orchestrator | 2026-01-09 01:04:20.217262 | orchestrator | 2026-01-09 01:04:20.217278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:04:20.217286 | orchestrator | 2026-01-09 01:04:20.217292 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:04:20.217299 | orchestrator | Friday 09 January 2026 01:02:56 +0000 (0:00:00.626) 0:00:00.626 ******** 2026-01-09 01:04:20.217305 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:04:20.217313 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:04:20.217319 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:04:20.217325 | orchestrator | 2026-01-09 01:04:20.217332 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:04:20.217338 | orchestrator | Friday 09 January 2026 01:02:56 +0000 (0:00:00.389) 0:00:01.016 ******** 2026-01-09 01:04:20.217346 | orchestrator | ok:
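The task-state polling recorded above (query each task, print its state, sleep one second, repeat until no task is STARTED) follows a simple wait-loop pattern. A minimal sketch, assuming a hypothetical `get_task_state` callable supplied by the caller (not the actual OSISM client API):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until none is in state STARTED.

    get_task_state is a hypothetical helper: task id -> state string.
    Returns the final mapping of task id to state.
    """
    while True:
        states = {tid: get_task_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

This is only an illustration of the loop shape visible in the log output; the real orchestrator may use backoff, timeouts, or an async notification channel instead of fixed one-second polling.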
[testbed-node-0] => (item=enable_placement_True) 2026-01-09 01:04:20.217353 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-09 01:04:20.217359 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-09 01:04:20.217365 | orchestrator | 2026-01-09 01:04:20.217372 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-09 01:04:20.217378 | orchestrator | 2026-01-09 01:04:20.217384 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-09 01:04:20.217391 | orchestrator | Friday 09 January 2026 01:02:57 +0000 (0:00:00.492) 0:00:01.508 ******** 2026-01-09 01:04:20.217412 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:04:20.217422 | orchestrator | 2026-01-09 01:04:20.217428 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-09 01:04:20.217467 | orchestrator | Friday 09 January 2026 01:02:58 +0000 (0:00:01.317) 0:00:02.825 ******** 2026-01-09 01:04:20.217474 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-09 01:04:20.217481 | orchestrator | 2026-01-09 01:04:20.217488 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-09 01:04:20.217494 | orchestrator | Friday 09 January 2026 01:03:02 +0000 (0:00:03.722) 0:00:06.548 ******** 2026-01-09 01:04:20.217501 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-09 01:04:20.217508 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-09 01:04:20.217514 | orchestrator | 2026-01-09 01:04:20.217520 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-09 01:04:20.217527 | 
orchestrator | Friday 09 January 2026 01:03:09 +0000 (0:00:07.657) 0:00:14.206 ******** 2026-01-09 01:04:20.217534 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-09 01:04:20.217540 | orchestrator | 2026-01-09 01:04:20.217547 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-09 01:04:20.217553 | orchestrator | Friday 09 January 2026 01:03:14 +0000 (0:00:04.312) 0:00:18.518 ******** 2026-01-09 01:04:20.217560 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-09 01:04:20.217566 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-09 01:04:20.217572 | orchestrator | 2026-01-09 01:04:20.217578 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-09 01:04:20.217584 | orchestrator | Friday 09 January 2026 01:03:18 +0000 (0:00:04.728) 0:00:23.247 ******** 2026-01-09 01:04:20.217591 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-09 01:04:20.217597 | orchestrator | 2026-01-09 01:04:20.217604 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-09 01:04:20.217610 | orchestrator | Friday 09 January 2026 01:03:23 +0000 (0:00:04.386) 0:00:27.633 ******** 2026-01-09 01:04:20.217616 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-09 01:04:20.217622 | orchestrator | 2026-01-09 01:04:20.217629 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-09 01:04:20.217635 | orchestrator | Friday 09 January 2026 01:03:27 +0000 (0:00:03.705) 0:00:31.338 ******** 2026-01-09 01:04:20.217642 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.217648 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:20.217655 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:20.217661 | orchestrator | 2026-01-09 01:04:20.217667 | 
orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-09 01:04:20.217674 | orchestrator | Friday 09 January 2026 01:03:27 +0000 (0:00:00.258) 0:00:31.597 ******** 2026-01-09 01:04:20.217684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217736 | orchestrator | 2026-01-09 01:04:20.217742 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-09 01:04:20.217749 | orchestrator | Friday 09 January 2026 01:03:28 +0000 (0:00:00.985) 0:00:32.582 ******** 2026-01-09 01:04:20.217756 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.217762 | orchestrator | 2026-01-09 01:04:20.217769 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-09 01:04:20.217776 | orchestrator | Friday 09 January 2026 01:03:28 +0000 (0:00:00.350) 0:00:32.933 ******** 2026-01-09 01:04:20.217782 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.217789 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:20.217796 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:20.217802 | orchestrator | 2026-01-09 01:04:20.217809 | orchestrator | TASK [placement : include_tasks] 
*********************************************** 2026-01-09 01:04:20.217816 | orchestrator | Friday 09 January 2026 01:03:29 +0000 (0:00:00.695) 0:00:33.628 ******** 2026-01-09 01:04:20.217821 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:04:20.217828 | orchestrator | 2026-01-09 01:04:20.217835 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-09 01:04:20.217841 | orchestrator | Friday 09 January 2026 01:03:29 +0000 (0:00:00.588) 0:00:34.217 ******** 2026-01-09 01:04:20.217849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.217886 | orchestrator | 2026-01-09 01:04:20.217892 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-09 01:04:20.217899 | orchestrator | Friday 09 January 2026 01:03:31 +0000 (0:00:01.699) 0:00:35.916 ******** 2026-01-09 01:04:20.217905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.217912 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.217918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.217925 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:20.217936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.217947 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:20.217953 | orchestrator | 2026-01-09 01:04:20.217959 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-09 01:04:20.217966 | orchestrator | Friday 09 January 2026 01:03:33 +0000 (0:00:02.122) 0:00:38.039 ******** 2026-01-09 01:04:20.217975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.217982 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.217989 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:20.217995 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.218001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 
01:04:20.218012 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:20.218063 | orchestrator | 2026-01-09 01:04:20.218070 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-09 01:04:20.218076 | orchestrator | Friday 09 January 2026 01:03:34 +0000 (0:00:01.121) 0:00:39.161 ******** 2026-01-09 01:04:20.218088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218112 | orchestrator | 2026-01-09 01:04:20.218119 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-09 01:04:20.218125 | orchestrator | Friday 09 January 2026 01:03:37 +0000 (0:00:02.270) 0:00:41.431 ******** 2026-01-09 01:04:20.218132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218162 | orchestrator | 2026-01-09 01:04:20.218169 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-09 01:04:20.218175 | orchestrator | Friday 09 January 2026 01:03:40 +0000 (0:00:03.757) 0:00:45.188 ******** 2026-01-09 01:04:20.218181 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-09 01:04:20.218188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-09 01:04:20.218220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-09 01:04:20.218227 | orchestrator | 2026-01-09 01:04:20.218233 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-09 01:04:20.218240 | orchestrator | Friday 09 January 2026 01:03:42 +0000 (0:00:01.846) 0:00:47.035 ******** 2026-01-09 01:04:20.218246 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:04:20.218252 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:04:20.218258 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:04:20.218264 | orchestrator | 2026-01-09 01:04:20.218270 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-09 01:04:20.218276 | orchestrator | Friday 09 January 2026 01:03:45 +0000 (0:00:02.687) 0:00:49.723 ******** 2026-01-09 01:04:20.218282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.218294 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:20.218300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.218306 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:20.218317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-09 01:04:20.218324 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:20.218330 | orchestrator | 2026-01-09 01:04:20.218337 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-09 01:04:20.218343 | orchestrator | Friday 09 January 2026 01:03:46 +0000 (0:00:01.094) 0:00:50.817 ******** 2026-01-09 01:04:20.218360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 
01:04:20.218366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-09 01:04:20.218394 | orchestrator | 2026-01-09 01:04:20.218406 | orchestrator | TASK [placement : Creating placement databases] 
******************************** 2026-01-09 01:04:20.218412 | orchestrator | Friday 09 January 2026 01:03:48 +0000 (0:00:01.814) 0:00:52.631 ******** 2026-01-09 01:04:20.218418 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:04:20.218424 | orchestrator | 2026-01-09 01:04:20.218431 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-09 01:04:20.218437 | orchestrator | Friday 09 January 2026 01:03:51 +0000 (0:00:03.512) 0:00:56.144 ******** 2026-01-09 01:04:20.218443 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:04:20.218449 | orchestrator | 2026-01-09 01:04:20.218455 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-09 01:04:20.218461 | orchestrator | Friday 09 January 2026 01:03:55 +0000 (0:00:03.267) 0:00:59.412 ******** 2026-01-09 01:04:20.218468 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:04:20.218474 | orchestrator | 2026-01-09 01:04:20.218480 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-09 01:04:20.218486 | orchestrator | Friday 09 January 2026 01:04:11 +0000 (0:00:16.125) 0:01:15.537 ******** 2026-01-09 01:04:20.218492 | orchestrator | 2026-01-09 01:04:20.218499 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-09 01:04:20.218505 | orchestrator | Friday 09 January 2026 01:04:11 +0000 (0:00:00.071) 0:01:15.608 ******** 2026-01-09 01:04:20.218511 | orchestrator | 2026-01-09 01:04:20.218520 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-09 01:04:20.218527 | orchestrator | Friday 09 January 2026 01:04:11 +0000 (0:00:00.063) 0:01:15.671 ******** 2026-01-09 01:04:20.218533 | orchestrator | 2026-01-09 01:04:20.218539 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-09 01:04:20.218545 | 
orchestrator | Friday 09 January 2026 01:04:11 +0000 (0:00:00.067) 0:01:15.739 ******** 2026-01-09 01:04:20.218551 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:04:20.218557 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:04:20.218563 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:04:20.218569 | orchestrator | 2026-01-09 01:04:20.218575 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:04:20.218582 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-09 01:04:20.218592 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:04:20.218598 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:04:20.218609 | orchestrator | 2026-01-09 01:04:20.218615 | orchestrator | 2026-01-09 01:04:20.218626 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:04:20.218632 | orchestrator | Friday 09 January 2026 01:04:18 +0000 (0:00:07.288) 0:01:23.028 ******** 2026-01-09 01:04:20.218638 | orchestrator | =============================================================================== 2026-01-09 01:04:20.218644 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.13s 2026-01-09 01:04:20.218650 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.66s 2026-01-09 01:04:20.218656 | orchestrator | placement : Restart placement-api container ----------------------------- 7.29s 2026-01-09 01:04:20.218662 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.73s 2026-01-09 01:04:20.218668 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.39s 2026-01-09 01:04:20.218674 | orchestrator | 
service-ks-register : placement | Creating projects --------------------- 4.31s 2026-01-09 01:04:20.218680 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.76s 2026-01-09 01:04:20.218687 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.72s 2026-01-09 01:04:20.218693 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.71s 2026-01-09 01:04:20.218699 | orchestrator | placement : Creating placement databases -------------------------------- 3.51s 2026-01-09 01:04:20.218705 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.27s 2026-01-09 01:04:20.218712 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.69s 2026-01-09 01:04:20.218718 | orchestrator | placement : Copying over config.json files for services ----------------- 2.27s 2026-01-09 01:04:20.218724 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.12s 2026-01-09 01:04:20.218730 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.85s 2026-01-09 01:04:20.218736 | orchestrator | placement : Check placement containers ---------------------------------- 1.81s 2026-01-09 01:04:20.218742 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.70s 2026-01-09 01:04:20.218748 | orchestrator | placement : include_tasks ----------------------------------------------- 1.32s 2026-01-09 01:04:20.218754 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.12s 2026-01-09 01:04:20.218760 | orchestrator | placement : Copying over existing policy file --------------------------- 1.09s 2026-01-09 01:04:20.218766 | orchestrator | 2026-01-09 01:04:20 | INFO  | Task 88234be6-980a-4993-b568-a1eb4a9610b6 is in state SUCCESS 2026-01-09 01:04:20.218773 | 
orchestrator | 2026-01-09 01:04:20 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:04:20.218779 | orchestrator | 2026-01-09 01:04:20 | INFO  | Task 2b415a9e-4996-4286-8cb8-0cd5719cb085 is in state STARTED 2026-01-09 01:04:20.218786 | orchestrator | 2026-01-09 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:23.235636 | orchestrator | 2026-01-09 01:04:23 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:04:23.238008 | orchestrator | 2026-01-09 01:04:23 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:04:23.240359 | orchestrator | 2026-01-09 01:04:23 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:04:23.242248 | orchestrator | 2026-01-09 01:04:23 | INFO  | Task 2b415a9e-4996-4286-8cb8-0cd5719cb085 is in state STARTED 2026-01-09 01:04:23.242348 | orchestrator | 2026-01-09 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:26.275653 | orchestrator | 2026-01-09 01:04:26 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:04:26.279787 | orchestrator | 2026-01-09 01:04:26 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:04:26.282475 | orchestrator | 2026-01-09 01:04:26 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:04:26.284563 | orchestrator | 2026-01-09 01:04:26 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:04:26.285761 | orchestrator | 2026-01-09 01:04:26 | INFO  | Task 2b415a9e-4996-4286-8cb8-0cd5719cb085 is in state SUCCESS 2026-01-09 01:04:26.285896 | orchestrator | 2026-01-09 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:29.339469 | orchestrator | 2026-01-09 01:04:29 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:04:29.341646 | orchestrator | 2026-01-09 
01:04:29 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:04:29.344437 | orchestrator | 2026-01-09 01:04:29 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:04:29.346000 | orchestrator | 2026-01-09 01:04:29 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:04:29.346074 | orchestrator | 2026-01-09 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:32.388634 | orchestrator | 2026-01-09 01:04:32 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:04:32.388776 | orchestrator | 2026-01-09 01:04:32 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state STARTED 2026-01-09 01:04:32.389939 | orchestrator | 2026-01-09 01:04:32 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:04:32.390656 | orchestrator | 2026-01-09 01:04:32 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:04:32.390695 | orchestrator | 2026-01-09 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:04:35.420009 | orchestrator | 2026-01-09 01:04:35 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:04:35.422632 | orchestrator | 2026-01-09 01:04:35 | INFO  | Task d5c33316-c8f6-42af-818c-07595d2e34f6 is in state SUCCESS 2026-01-09 01:04:35.424017 | orchestrator | 2026-01-09 01:04:35.424084 | orchestrator | 2026-01-09 01:04:35.424097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:04:35.424105 | orchestrator | 2026-01-09 01:04:35.424111 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:04:35.424117 | orchestrator | Friday 09 January 2026 01:04:22 +0000 (0:00:00.157) 0:00:00.157 ******** 2026-01-09 01:04:35.424124 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:04:35.424132 | orchestrator 
| ok: [testbed-node-1] 2026-01-09 01:04:35.424138 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:04:35.424143 | orchestrator | 2026-01-09 01:04:35.424150 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:04:35.424156 | orchestrator | Friday 09 January 2026 01:04:23 +0000 (0:00:00.273) 0:00:00.431 ******** 2026-01-09 01:04:35.424164 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-09 01:04:35.424171 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-09 01:04:35.424177 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-09 01:04:35.424184 | orchestrator | 2026-01-09 01:04:35.424190 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-09 01:04:35.424196 | orchestrator | 2026-01-09 01:04:35.424202 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-09 01:04:35.424209 | orchestrator | Friday 09 January 2026 01:04:23 +0000 (0:00:00.565) 0:00:00.997 ******** 2026-01-09 01:04:35.424300 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:04:35.424307 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:04:35.424313 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:04:35.424320 | orchestrator | 2026-01-09 01:04:35.424379 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:04:35.424388 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:04:35.424398 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:04:35.424406 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:04:35.424412 | orchestrator | 2026-01-09 01:04:35.424419 | orchestrator | 2026-01-09 01:04:35.424425 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:04:35.424432 | orchestrator | Friday 09 January 2026 01:04:24 +0000 (0:00:00.667) 0:00:01.664 ******** 2026-01-09 01:04:35.424438 | orchestrator | =============================================================================== 2026-01-09 01:04:35.424454 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.67s 2026-01-09 01:04:35.424461 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-01-09 01:04:35.424468 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-01-09 01:04:35.424474 | orchestrator | 2026-01-09 01:04:35.424480 | orchestrator | 2026-01-09 01:04:35.424487 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:04:35.424493 | orchestrator | 2026-01-09 01:04:35.424500 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:04:35.424506 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.354) 0:00:00.354 ******** 2026-01-09 01:04:35.424553 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:04:35.424579 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:04:35.424585 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:04:35.424591 | orchestrator | 2026-01-09 01:04:35.424597 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:04:35.424603 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.291) 0:00:00.645 ******** 2026-01-09 01:04:35.424610 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-09 01:04:35.424616 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-09 01:04:35.424622 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-09 
01:04:35.424628 | orchestrator | 2026-01-09 01:04:35.424634 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-09 01:04:35.424640 | orchestrator | 2026-01-09 01:04:35.424646 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-09 01:04:35.424652 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.421) 0:00:01.066 ******** 2026-01-09 01:04:35.424659 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:04:35.424666 | orchestrator | 2026-01-09 01:04:35.424671 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-09 01:04:35.424678 | orchestrator | Friday 09 January 2026 01:01:29 +0000 (0:00:00.534) 0:00:01.600 ******** 2026-01-09 01:04:35.424697 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-09 01:04:35.424703 | orchestrator | 2026-01-09 01:04:35.424708 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-09 01:04:35.424713 | orchestrator | Friday 09 January 2026 01:01:33 +0000 (0:00:03.860) 0:00:05.461 ******** 2026-01-09 01:04:35.424719 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-09 01:04:35.424726 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-09 01:04:35.424731 | orchestrator | 2026-01-09 01:04:35.424736 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-09 01:04:35.424742 | orchestrator | Friday 09 January 2026 01:01:41 +0000 (0:00:08.558) 0:00:14.020 ******** 2026-01-09 01:04:35.424757 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-09 01:04:35.424763 | orchestrator | 2026-01-09 01:04:35.424769 | orchestrator | 
TASK [service-ks-register : designate | Creating users] ************************ 2026-01-09 01:04:35.424775 | orchestrator | Friday 09 January 2026 01:01:45 +0000 (0:00:03.548) 0:00:17.568 ******** 2026-01-09 01:04:35.424797 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-09 01:04:35.424804 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-09 01:04:35.424810 | orchestrator | 2026-01-09 01:04:35.424816 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-09 01:04:35.424822 | orchestrator | Friday 09 January 2026 01:01:49 +0000 (0:00:04.325) 0:00:21.893 ******** 2026-01-09 01:04:35.424827 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-09 01:04:35.424834 | orchestrator | 2026-01-09 01:04:35.424839 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-09 01:04:35.424845 | orchestrator | Friday 09 January 2026 01:01:53 +0000 (0:00:03.755) 0:00:25.649 ******** 2026-01-09 01:04:35.424851 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-09 01:04:35.424856 | orchestrator | 2026-01-09 01:04:35.424861 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-09 01:04:35.424866 | orchestrator | Friday 09 January 2026 01:01:57 +0000 (0:00:04.282) 0:00:29.931 ******** 2026-01-09 01:04:35.424876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.424889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.424900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.424914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.424994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425007 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425065 | orchestrator | 2026-01-09 01:04:35.425071 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-09 01:04:35.425077 | orchestrator | Friday 09 January 2026 01:02:00 
+0000 (0:00:03.200) 0:00:33.132 ******** 2026-01-09 01:04:35.425084 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.425091 | orchestrator | 2026-01-09 01:04:35.425096 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-09 01:04:35.425101 | orchestrator | Friday 09 January 2026 01:02:01 +0000 (0:00:00.132) 0:00:33.265 ******** 2026-01-09 01:04:35.425107 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.425113 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:35.425118 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:35.425124 | orchestrator | 2026-01-09 01:04:35.425130 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-09 01:04:35.425135 | orchestrator | Friday 09 January 2026 01:02:01 +0000 (0:00:00.339) 0:00:33.604 ******** 2026-01-09 01:04:35.425141 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:04:35.425147 | orchestrator | 2026-01-09 01:04:35.425152 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-09 01:04:35.425158 | orchestrator | Friday 09 January 2026 01:02:02 +0000 (0:00:00.748) 0:00:34.352 ******** 2026-01-09 01:04:35.425164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.425172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.425188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-01-09 01:04:35.425200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 
01:04:35.425272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.425355 | orchestrator | 2026-01-09 01:04:35.425361 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-09 01:04:35.425368 | orchestrator | Friday 09 January 2026 01:02:09 +0000 (0:00:07.187) 0:00:41.540 ******** 2026-01-09 01:04:35.425375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-01-09 01:04:35.425765 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.425769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425810 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:35.425814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425846 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:35.425850 | orchestrator | 2026-01-09 01:04:35.425856 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-09 01:04:35.425861 | orchestrator | Friday 09 January 2026 01:02:10 +0000 (0:00:00.938) 0:00:42.478 ******** 2026-01-09 01:04:35.425865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425898 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.425906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425936 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:35.425944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.425966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.425971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-01-09 01:04:35.425979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.425989 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:35.425993 | orchestrator | 2026-01-09 01:04:35.425997 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-09 01:04:35.426001 | orchestrator | Friday 09 January 2026 01:02:13 +0000 (0:00:03.205) 0:00:45.684 ******** 2026-01-09 01:04:35.426008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426186 | orchestrator | 2026-01-09 01:04:35.426196 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2026-01-09 01:04:35.426202 | orchestrator | Friday 09 January 2026 01:02:20 +0000 (0:00:07.267) 0:00:52.951 ******** 2026-01-09 01:04:35.426230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426249 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.426256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426353 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426380 | orchestrator | 2026-01-09 01:04:35.426387 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-09 01:04:35.426393 | orchestrator | Friday 09 January 2026 01:02:41 +0000 (0:00:20.838) 0:01:13.789 ******** 2026-01-09 01:04:35.426400 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-09 01:04:35.426407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-09 01:04:35.426413 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-09 01:04:35.426426 | orchestrator | 2026-01-09 01:04:35.426437 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-09 01:04:35.426443 | orchestrator | Friday 09 January 2026 01:02:49 +0000 (0:00:08.307) 0:01:22.097 ******** 2026-01-09 01:04:35.426450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-09 01:04:35.426457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-09 01:04:35.426463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-09 01:04:35.426783 | orchestrator | 2026-01-09 01:04:35.426788 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-09 01:04:35.426792 | orchestrator | Friday 09 
January 2026 01:02:53 +0000 (0:00:03.743) 0:01:25.840 ******** 2026-01-09 01:04:35.426803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-09 01:04:35.426856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426880 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426920 | orchestrator | 2026-01-09 01:04:35.426926 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-01-09 01:04:35.426935 | orchestrator | Friday 09 January 2026 01:02:57 +0000 (0:00:03.815) 0:01:29.656 ******** 2026-01-09 01:04:35.426943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.426956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-09 01:04:35.426963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426983 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.426987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.426991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427044 | orchestrator | 2026-01-09 01:04:35.427048 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-09 01:04:35.427052 | orchestrator | Friday 09 January 2026 01:03:00 +0000 (0:00:03.338) 0:01:32.995 ******** 2026-01-09 01:04:35.427056 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.427060 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:35.427064 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:35.427068 | orchestrator | 2026-01-09 01:04:35.427072 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-09 01:04:35.427076 | orchestrator | Friday 09 January 2026 01:03:01 +0000 (0:00:00.431) 0:01:33.426 ******** 2026-01-09 01:04:35.427080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.427087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.427091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-09 01:04:35.427117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427135 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:04:35.427141 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.427150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.427154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427174 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:04:35.427181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-09 01:04:35.427187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-09 01:04:35.427191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:04:35.427210 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:04:35.427366 | orchestrator | 2026-01-09 01:04:35.427375 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-09 01:04:35.427380 | orchestrator | Friday 09 January 2026 01:03:02 +0000 (0:00:01.375) 0:01:34.802 ******** 2026-01-09 01:04:35.427391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.427403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.427408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-09 01:04:35.427418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427479 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:04:35.427518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:04:35.427523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:04:35.427536 | orchestrator |
2026-01-09 01:04:35.427540 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-09 01:04:35.427545 | orchestrator | Friday 09 January 2026 01:03:07 +0000 (0:00:00.546) 0:01:40.087 ********
2026-01-09 01:04:35.427549 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:04:35.427553 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:04:35.427558 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:04:35.427562 | orchestrator |
2026-01-09 01:04:35.427567 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-01-09 01:04:35.427572 | orchestrator | Friday 09 January 2026 01:03:08 +0000 (0:00:00.546) 0:01:40.634 ********
2026-01-09 01:04:35.427577 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-01-09 01:04:35.427582 | orchestrator |
2026-01-09 01:04:35.427586 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-01-09 01:04:35.427591 | orchestrator | Friday 09 January 2026 01:03:11 +0000 (0:00:02.775) 0:01:43.409 ********
2026-01-09 01:04:35.427596 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-09 01:04:35.427601 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-01-09 01:04:35.427606 | orchestrator |
2026-01-09 01:04:35.427610 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-01-09 01:04:35.427615 | orchestrator | Friday 09 January 2026 01:03:14 +0000 (0:00:03.250) 0:01:46.660 ********
2026-01-09 01:04:35.427619 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427624 | orchestrator |
2026-01-09 01:04:35.427628 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-09 01:04:35.427636 | orchestrator | Friday 09 January 2026 01:03:31 +0000 (0:00:17.072) 0:02:03.733 ********
2026-01-09 01:04:35.427640 | orchestrator |
2026-01-09 01:04:35.427645 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-09 01:04:35.427649 | orchestrator | Friday 09 January 2026 01:03:32 +0000 (0:00:00.569) 0:02:04.302 ********
2026-01-09 01:04:35.427654 | orchestrator |
2026-01-09 01:04:35.427659 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-09 01:04:35.427668 | orchestrator | Friday 09 January 2026 01:03:32 +0000 (0:00:00.188) 0:02:04.490 ********
2026-01-09 01:04:35.427673 | orchestrator |
2026-01-09 01:04:35.427678 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-01-09 01:04:35.427682 | orchestrator | Friday 09 January 2026 01:03:32 +0000 (0:00:00.209) 0:02:04.700 ********
2026-01-09 01:04:35.427687 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427696 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427701 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427706 | orchestrator |
2026-01-09 01:04:35.427710 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-01-09 01:04:35.427713 | orchestrator | Friday 09 January 2026 01:03:43 +0000 (0:00:10.488) 0:02:15.188 ********
2026-01-09 01:04:35.427720 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427724 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427728 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427732 | orchestrator |
2026-01-09 01:04:35.427736 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-01-09 01:04:35.427740 | orchestrator | Friday 09 January 2026 01:03:52 +0000 (0:00:09.124) 0:02:24.312 ********
2026-01-09 01:04:35.427743 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427747 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427751 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427755 | orchestrator |
2026-01-09 01:04:35.427758 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-01-09 01:04:35.427762 | orchestrator | Friday 09 January 2026 01:04:05 +0000 (0:00:13.007) 0:02:37.320 ********
2026-01-09 01:04:35.427766 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427770 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427774 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427777 | orchestrator |
2026-01-09 01:04:35.427781 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-01-09 01:04:35.427785 | orchestrator | Friday 09 January 2026 01:04:14 +0000 (0:00:09.336) 0:02:46.657 ********
2026-01-09 01:04:35.427789 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427793 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427796 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427800 | orchestrator |
2026-01-09 01:04:35.427804 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-01-09 01:04:35.427808 | orchestrator | Friday 09 January 2026 01:04:19 +0000 (0:00:05.469) 0:02:52.127 ********
2026-01-09 01:04:35.427812 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427815 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:04:35.427819 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:04:35.427823 | orchestrator |
2026-01-09 01:04:35.427827 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-01-09 01:04:35.427831 | orchestrator | Friday 09 January 2026 01:04:25 +0000 (0:00:05.245) 0:02:57.372 ********
2026-01-09 01:04:35.427835 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:04:35.427838 | orchestrator |
2026-01-09 01:04:35.427842 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:04:35.427847 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 01:04:35.427851 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-09 01:04:35.427855 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-09 01:04:35.427859 | orchestrator |
2026-01-09 01:04:35.427863 | orchestrator |
2026-01-09 01:04:35.427867 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:04:35.427870 | orchestrator | Friday 09 January 2026 01:04:33 +0000 (0:00:07.817) 0:03:05.189 ********
2026-01-09 01:04:35.427878 | orchestrator | ===============================================================================
2026-01-09 01:04:35.427883 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.84s
2026-01-09 01:04:35.427887 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.07s
2026-01-09 01:04:35.427890 | orchestrator | designate : Restart designate-central container ------------------------ 13.01s
2026-01-09 01:04:35.427894 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.49s
2026-01-09 01:04:35.427898 | orchestrator | designate : Restart designate-producer container ------------------------ 9.34s
2026-01-09 01:04:35.427902 | orchestrator | designate : Restart designate-api container ----------------------------- 9.12s
2026-01-09 01:04:35.427906 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 8.56s
2026-01-09 01:04:35.427909 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.31s
2026-01-09 01:04:35.427913 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.82s
2026-01-09 01:04:35.427917 | orchestrator | designate : Copying over config.json files for services ----------------- 7.27s
2026-01-09 01:04:35.427921 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.19s
2026-01-09 01:04:35.428013 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.47s
2026-01-09 01:04:35.428019 | orchestrator | designate : Check designate containers ---------------------------------- 5.29s
2026-01-09 01:04:35.428026 | orchestrator | designate : Restart designate-worker container -------------------------- 5.25s
2026-01-09 01:04:35.428030 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.33s
2026-01-09 01:04:35.428034 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.28s
2026-01-09 01:04:35.428038 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.86s
2026-01-09 01:04:35.428042 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.82s
2026-01-09 01:04:35.428046 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.76s
2026-01-09 01:04:35.428049 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.74s
2026-01-09 01:04:35.428054 | orchestrator | 2026-01-09 01:04:35 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:35.428058 | orchestrator | 2026-01-09 01:04:35 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:35.428064 | orchestrator | 2026-01-09 01:04:35 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:35.428109 | orchestrator | 2026-01-09 01:04:35 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:38.459012 | orchestrator | 2026-01-09 01:04:38 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:38.460321 | orchestrator | 2026-01-09 01:04:38 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:38.461188 | orchestrator | 2026-01-09 01:04:38 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:38.463039 | orchestrator | 2026-01-09 01:04:38 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:38.463104 | orchestrator | 2026-01-09 01:04:38 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:41.494614 | orchestrator | 2026-01-09 01:04:41 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:41.495313 | orchestrator | 2026-01-09 01:04:41 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:41.496021 | orchestrator | 2026-01-09 01:04:41 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:41.498320 | orchestrator | 2026-01-09 01:04:41 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:41.498433 | orchestrator | 2026-01-09 01:04:41 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:44.530574 | orchestrator | 2026-01-09 01:04:44 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:44.531142 | orchestrator | 2026-01-09 01:04:44 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:44.531672 | orchestrator | 2026-01-09 01:04:44 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:44.532416 | orchestrator | 2026-01-09 01:04:44 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:44.532446 | orchestrator | 2026-01-09 01:04:44 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:47.591699 | orchestrator | 2026-01-09 01:04:47 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:47.593686 | orchestrator | 2026-01-09 01:04:47 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:47.596482 | orchestrator | 2026-01-09 01:04:47 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:47.599569 | orchestrator | 2026-01-09 01:04:47 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:47.599638 | orchestrator | 2026-01-09 01:04:47 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:50.673280 | orchestrator | 2026-01-09 01:04:50 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:50.679194 | orchestrator | 2026-01-09 01:04:50 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:50.679895 | orchestrator | 2026-01-09 01:04:50 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:50.681392 | orchestrator | 2026-01-09 01:04:50 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:50.681426 | orchestrator | 2026-01-09 01:04:50 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:53.710200 | orchestrator | 2026-01-09 01:04:53 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:53.710865 | orchestrator | 2026-01-09 01:04:53 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:53.711448 | orchestrator | 2026-01-09 01:04:53 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:53.712110 | orchestrator | 2026-01-09 01:04:53 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:53.712259 | orchestrator | 2026-01-09 01:04:53 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:56.744453 | orchestrator | 2026-01-09 01:04:56 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:56.744964 | orchestrator | 2026-01-09 01:04:56 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:56.745798 | orchestrator | 2026-01-09 01:04:56 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:56.746749 | orchestrator | 2026-01-09 01:04:56 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:56.747736 | orchestrator | 2026-01-09 01:04:56 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:04:59.825210 | orchestrator | 2026-01-09 01:04:59 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:04:59.825800 | orchestrator | 2026-01-09 01:04:59 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:04:59.826663 | orchestrator | 2026-01-09 01:04:59 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:04:59.827581 | orchestrator | 2026-01-09 01:04:59 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:04:59.827601 | orchestrator | 2026-01-09 01:04:59 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:03.228866 | orchestrator | 2026-01-09 01:05:02 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:03.228929 | orchestrator | 2026-01-09 01:05:02 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:05:03.228938 | orchestrator | 2026-01-09 01:05:02 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:03.228945 | orchestrator | 2026-01-09 01:05:02 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:05:03.228951 | orchestrator | 2026-01-09 01:05:02 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:05.924539 | orchestrator | 2026-01-09 01:05:05 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:05.924612 | orchestrator | 2026-01-09 01:05:05 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:05:05.925675 | orchestrator | 2026-01-09 01:05:05 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:05.925709 | orchestrator | 2026-01-09 01:05:05 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:05:05.925715 | orchestrator | 2026-01-09 01:05:05 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:09.051988 | orchestrator | 2026-01-09 01:05:08 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:09.052054 | orchestrator | 2026-01-09 01:05:08 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:05:09.052063 | orchestrator | 2026-01-09 01:05:08 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:09.052071 | orchestrator | 2026-01-09 01:05:08 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:05:09.052075 | orchestrator | 2026-01-09 01:05:08 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:11.993065 | orchestrator | 2026-01-09 01:05:11 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:11.993397 | orchestrator | 2026-01-09 01:05:11 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state STARTED
2026-01-09 01:05:11.994552 | orchestrator | 2026-01-09 01:05:11 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:11.995056 | orchestrator | 2026-01-09 01:05:11 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:05:11.995093 | orchestrator | 2026-01-09 01:05:11 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:15.023374 | orchestrator | 2026-01-09 01:05:15 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:15.023882 | orchestrator | 2026-01-09 01:05:15 | INFO  | Task caa8cf72-93d9-40e8-b0b6-12d622df36ee is in state SUCCESS
2026-01-09 01:05:15.024076 | orchestrator | 2026-01-09 01:05:15 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:15.025447 | orchestrator | 2026-01-09 01:05:15 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED
2026-01-09 01:05:15.026084 | orchestrator | 2026-01-09 01:05:15 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:05:15.026147 | orchestrator | 2026-01-09 01:05:15 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:05:18.080993 | orchestrator | 2026-01-09 01:05:18 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:05:18.081034 | orchestrator | 2026-01-09 01:05:18 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED
2026-01-09 01:05:18.082168 | orchestrator | 2026-01-09 01:05:18 | INFO  | Task
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:18.082197 | orchestrator | 2026-01-09 01:05:18 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:18.082203 | orchestrator | 2026-01-09 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:21.113154 | orchestrator | 2026-01-09 01:05:21 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:21.114361 | orchestrator | 2026-01-09 01:05:21 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:21.114940 | orchestrator | 2026-01-09 01:05:21 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:21.117177 | orchestrator | 2026-01-09 01:05:21 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:21.117222 | orchestrator | 2026-01-09 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:24.159778 | orchestrator | 2026-01-09 01:05:24 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:24.162795 | orchestrator | 2026-01-09 01:05:24 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:24.166063 | orchestrator | 2026-01-09 01:05:24 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:24.167723 | orchestrator | 2026-01-09 01:05:24 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:24.167988 | orchestrator | 2026-01-09 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:27.212403 | orchestrator | 2026-01-09 01:05:27 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:27.213655 | orchestrator | 2026-01-09 01:05:27 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:27.214577 | orchestrator | 2026-01-09 01:05:27 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:27.215549 | orchestrator | 2026-01-09 01:05:27 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:27.215662 | orchestrator | 2026-01-09 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:30.266601 | orchestrator | 2026-01-09 01:05:30 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:30.266904 | orchestrator | 2026-01-09 01:05:30 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:30.270237 | orchestrator | 2026-01-09 01:05:30 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:30.270722 | orchestrator | 2026-01-09 01:05:30 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:30.270771 | orchestrator | 2026-01-09 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:33.323592 | orchestrator | 2026-01-09 01:05:33 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:33.325859 | orchestrator | 2026-01-09 01:05:33 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:33.328468 | orchestrator | 2026-01-09 01:05:33 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:33.330925 | orchestrator | 2026-01-09 01:05:33 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:33.330969 | orchestrator | 2026-01-09 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:36.381568 | orchestrator | 2026-01-09 01:05:36 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:36.385517 | orchestrator | 2026-01-09 01:05:36 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:36.387403 | orchestrator | 2026-01-09 01:05:36 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:36.389338 | orchestrator | 2026-01-09 01:05:36 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:36.389385 | orchestrator | 2026-01-09 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:39.443686 | orchestrator | 2026-01-09 01:05:39 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:39.444061 | orchestrator | 2026-01-09 01:05:39 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:39.445899 | orchestrator | 2026-01-09 01:05:39 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:39.447708 | orchestrator | 2026-01-09 01:05:39 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:39.447760 | orchestrator | 2026-01-09 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:42.508434 | orchestrator | 2026-01-09 01:05:42 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:42.509820 | orchestrator | 2026-01-09 01:05:42 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:42.511740 | orchestrator | 2026-01-09 01:05:42 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:42.513728 | orchestrator | 2026-01-09 01:05:42 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:42.514170 | orchestrator | 2026-01-09 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:45.549866 | orchestrator | 2026-01-09 01:05:45 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:45.550522 | orchestrator | 2026-01-09 01:05:45 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:45.551613 | orchestrator | 2026-01-09 01:05:45 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:45.553804 | orchestrator | 2026-01-09 01:05:45 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:45.553874 | orchestrator | 2026-01-09 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:48.603525 | orchestrator | 2026-01-09 01:05:48 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:48.603788 | orchestrator | 2026-01-09 01:05:48 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:48.604550 | orchestrator | 2026-01-09 01:05:48 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:48.605252 | orchestrator | 2026-01-09 01:05:48 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:48.605327 | orchestrator | 2026-01-09 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:51.630730 | orchestrator | 2026-01-09 01:05:51 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:51.631185 | orchestrator | 2026-01-09 01:05:51 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:51.631932 | orchestrator | 2026-01-09 01:05:51 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:51.632421 | orchestrator | 2026-01-09 01:05:51 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:51.632482 | orchestrator | 2026-01-09 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:54.669960 | orchestrator | 2026-01-09 01:05:54 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:54.670139 | orchestrator | 2026-01-09 01:05:54 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:54.671585 | orchestrator | 2026-01-09 01:05:54 | INFO  | Task 
8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:54.671620 | orchestrator | 2026-01-09 01:05:54 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:54.671626 | orchestrator | 2026-01-09 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:05:57.709221 | orchestrator | 2026-01-09 01:05:57 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:05:57.713006 | orchestrator | 2026-01-09 01:05:57 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:05:57.716106 | orchestrator | 2026-01-09 01:05:57 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:05:57.718995 | orchestrator | 2026-01-09 01:05:57 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:05:57.719046 | orchestrator | 2026-01-09 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:00.757373 | orchestrator | 2026-01-09 01:06:00 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:00.757926 | orchestrator | 2026-01-09 01:06:00 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state STARTED 2026-01-09 01:06:00.758718 | orchestrator | 2026-01-09 01:06:00 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state STARTED 2026-01-09 01:06:00.759650 | orchestrator | 2026-01-09 01:06:00 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:00.759676 | orchestrator | 2026-01-09 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:03.795016 | orchestrator | 2026-01-09 01:06:03 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:03.796517 | orchestrator | 2026-01-09 01:06:03 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:03.799918 | orchestrator | 2026-01-09 01:06:03.799987 | orchestrator | 2026-01-09 
2026-01-09 01:06:03.799995 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 01:06:03.800002 | orchestrator |
2026-01-09 01:06:03.800009 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 01:06:03.800016 | orchestrator | Friday 09 January 2026 01:04:38 +0000 (0:00:00.266) 0:00:00.266 ********
2026-01-09 01:06:03.800022 | orchestrator | ok: [testbed-manager]
2026-01-09 01:06:03.800029 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:06:03.800035 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:06:03.800041 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:06:03.800047 | orchestrator | ok: [testbed-node-3]
2026-01-09 01:06:03.800053 | orchestrator | ok: [testbed-node-4]
2026-01-09 01:06:03.800061 | orchestrator | ok: [testbed-node-5]
2026-01-09 01:06:03.800067 | orchestrator |
2026-01-09 01:06:03.800073 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 01:06:03.800079 | orchestrator | Friday 09 January 2026 01:04:39 +0000 (0:00:00.855) 0:00:01.121 ********
2026-01-09 01:06:03.800169 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800179 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800186 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800192 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800198 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800205 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800212 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-09 01:06:03.800218 | orchestrator |
2026-01-09 01:06:03.800225 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-09 01:06:03.800231 | orchestrator |
2026-01-09 01:06:03.800237 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-09 01:06:03.800244 | orchestrator | Friday 09 January 2026 01:04:40 +0000 (0:00:00.654) 0:00:01.776 ********
2026-01-09 01:06:03.800268 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-09 01:06:03.800294 | orchestrator |
2026-01-09 01:06:03.800351 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-09 01:06:03.800360 | orchestrator | Friday 09 January 2026 01:04:41 +0000 (0:00:01.345) 0:00:03.122 ********
2026-01-09 01:06:03.800366 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-09 01:06:03.800372 | orchestrator |
2026-01-09 01:06:03.800378 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-09 01:06:03.800383 | orchestrator | Friday 09 January 2026 01:04:45 +0000 (0:00:03.467) 0:00:06.589 ********
2026-01-09 01:06:03.800390 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-09 01:06:03.800397 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-09 01:06:03.800403 | orchestrator |
2026-01-09 01:06:03.800409 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-09 01:06:03.800424 | orchestrator | Friday 09 January 2026 01:04:51 +0000 (0:00:06.818) 0:00:13.407 ********
2026-01-09 01:06:03.800437 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-09 01:06:03.800444 | orchestrator |
2026-01-09 01:06:03.800450 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-09 01:06:03.800456 | orchestrator | Friday 09 January 2026 01:04:54 +0000 (0:00:02.857) 0:00:16.265 ********
2026-01-09 01:06:03.800462 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-09 01:06:03.800468 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-09 01:06:03.800474 | orchestrator |
2026-01-09 01:06:03.800480 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-09 01:06:03.800494 | orchestrator | Friday 09 January 2026 01:04:58 +0000 (0:00:03.849) 0:00:20.115 ********
2026-01-09 01:06:03.800500 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-09 01:06:03.800527 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-09 01:06:03.800534 | orchestrator |
2026-01-09 01:06:03.800550 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-09 01:06:03.800556 | orchestrator | Friday 09 January 2026 01:05:06 +0000 (0:00:07.362) 0:00:27.477 ********
2026-01-09 01:06:03.800562 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-09 01:06:03.800569 | orchestrator |
2026-01-09 01:06:03.800575 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:06:03.800581 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800589 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800603 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800609 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800615 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800636 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800651 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:06:03.800658 | orchestrator |
2026-01-09 01:06:03.800664 | orchestrator |
2026-01-09 01:06:03.800671 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:06:03.800685 | orchestrator | Friday 09 January 2026 01:05:11 +0000 (0:00:05.715) 0:00:33.193 ********
2026-01-09 01:06:03.800692 | orchestrator | ===============================================================================
2026-01-09 01:06:03.800698 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.36s
2026-01-09 01:06:03.800704 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.82s
2026-01-09 01:06:03.800711 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.72s
2026-01-09 01:06:03.800718 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.85s
2026-01-09 01:06:03.800724 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.47s
2026-01-09 01:06:03.800730 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.86s
2026-01-09 01:06:03.800736 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.35s
2026-01-09 01:06:03.800742 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2026-01-09 01:06:03.800748 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-01-09 01:06:03.800754 | orchestrator |
2026-01-09 01:06:03.800760 | orchestrator |
2026-01-09 01:06:03.800766 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 01:06:03.800772 | orchestrator |
2026-01-09 01:06:03.800809 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 01:06:03.800816 | orchestrator | Friday 09 January 2026 01:03:51 +0000 (0:00:00.285) 0:00:00.285 ********
2026-01-09 01:06:03.800822 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:06:03.800828 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:06:03.800833 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:06:03.800840 | orchestrator |
2026-01-09 01:06:03.800859 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 01:06:03.800866 | orchestrator | Friday 09 January 2026 01:03:51 +0000 (0:00:00.274) 0:00:00.560 ********
2026-01-09 01:06:03.800872 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-09 01:06:03.800878 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-09 01:06:03.800897 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-09 01:06:03.800904 | orchestrator |
2026-01-09 01:06:03.800922 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-09 01:06:03.800928 | orchestrator |
2026-01-09 01:06:03.800934 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-09 01:06:03.800940 | orchestrator | Friday 09 January 2026 01:03:51 +0000 (0:00:00.473) 0:00:01.034 ********
2026-01-09 01:06:03.800947 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:06:03.800953 | orchestrator |
2026-01-09 01:06:03.800959 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-09 01:06:03.800972 | orchestrator | Friday 09 January 2026 01:03:53 +0000 (0:00:01.289) 0:00:02.323 ********
2026-01-09 01:06:03.800978 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-09 01:06:03.800984 | orchestrator |
2026-01-09 01:06:03.800990 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-09 01:06:03.800995 | orchestrator | Friday 09 January 2026 01:03:57 +0000 (0:00:04.542) 0:00:06.865 ********
2026-01-09 01:06:03.801001 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-09 01:06:03.801007 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-09 01:06:03.801016 | orchestrator |
2026-01-09 01:06:03.801025 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-09 01:06:03.801034 | orchestrator | Friday 09 January 2026 01:04:06 +0000 (0:00:08.520) 0:00:15.385 ********
2026-01-09 01:06:03.801044 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-09 01:06:03.801050 | orchestrator |
2026-01-09 01:06:03.801062 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-09 01:06:03.801069 | orchestrator | Friday 09 January 2026 01:04:10 +0000 (0:00:03.941) 0:00:19.327 ********
2026-01-09 01:06:03.801075 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-09 01:06:03.801081 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-09 01:06:03.801088 | orchestrator |
2026-01-09 01:06:03.801094 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-09 01:06:03.801100 | orchestrator | Friday 09 January 2026 01:04:14 +0000 (0:00:04.148) 0:00:23.495 ********
2026-01-09 01:06:03.801106 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-09 01:06:03.801111 | orchestrator |
2026-01-09 01:06:03.801118 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-09 01:06:03.801124 | orchestrator | Friday 09 January 2026 01:04:18 +0000 (0:00:04.148) 0:00:27.643 ********
2026-01-09 01:06:03.801130 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-09 01:06:03.801136 | orchestrator |
2026-01-09 01:06:03.801142 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-09 01:06:03.801148 | orchestrator | Friday 09 January 2026 01:04:22 +0000 (0:00:04.163) 0:00:31.807 ********
2026-01-09 01:06:03.801154 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.801160 | orchestrator |
2026-01-09 01:06:03.801166 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-09 01:06:03.801178 | orchestrator | Friday 09 January 2026 01:04:26 +0000 (0:00:03.835) 0:00:35.642 ********
2026-01-09 01:06:03.801185 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.801190 | orchestrator |
2026-01-09 01:06:03.801197 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-09 01:06:03.801202 | orchestrator | Friday 09 January 2026 01:04:30 +0000 (0:00:03.733) 0:00:39.376 ********
2026-01-09 01:06:03.801209 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.801214 | orchestrator |
2026-01-09 01:06:03.801220 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-01-09 01:06:03.801226 | orchestrator | Friday 09 January 2026 01:04:34 +0000 (0:00:03.804) 0:00:43.180 ********
2026-01-09 01:06:03.801248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-09 01:06:03.801377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-09 01:06:03.801384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-09 01:06:03.801395 | orchestrator |
2026-01-09 01:06:03.801402 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-09 01:06:03.801409 | orchestrator | Friday 09 January 2026 01:04:35 +0000 (0:00:01.370) 0:00:44.550 ********
2026-01-09 01:06:03.801415 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.801457 | orchestrator |
2026-01-09 01:06:03.801479 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-09 01:06:03.801485 | orchestrator | Friday 09 January 2026 01:04:35 +0000 (0:00:00.118) 0:00:44.669 ********
2026-01-09 01:06:03.801491 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.801496 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.801501 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.801506 | orchestrator |
2026-01-09 01:06:03.801512 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-09 01:06:03.801518 | orchestrator | Friday 09 January 2026 01:04:36 +0000 (0:00:00.464) 0:00:45.133 ********
2026-01-09 01:06:03.801523 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-09 01:06:03.801529 | orchestrator |
2026-01-09 01:06:03.801534 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-09 01:06:03.801540 | orchestrator | Friday 09 January 2026 01:04:37 +0000 (0:00:01.116) 0:00:46.250 ********
2026-01-09 01:06:03.801546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-09 01:06:03.801580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-09 01:06:03.801586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value':
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.801592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.801598 | orchestrator | 2026-01-09 01:06:03.801604 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-09 01:06:03.801639 | orchestrator | Friday 09 January 2026 01:04:40 +0000 (0:00:03.040) 0:00:49.290 ******** 2026-01-09 01:06:03.801647 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:06:03.801654 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:06:03.801660 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:06:03.801666 | orchestrator | 2026-01-09 01:06:03.801672 | orchestrator | TASK [magnum : 
include_tasks] ************************************************** 2026-01-09 01:06:03.801678 | orchestrator | Friday 09 January 2026 01:04:40 +0000 (0:00:00.311) 0:00:49.602 ******** 2026-01-09 01:06:03.801685 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:06:03.801700 | orchestrator | 2026-01-09 01:06:03.801707 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-09 01:06:03.801713 | orchestrator | Friday 09 January 2026 01:04:41 +0000 (0:00:00.589) 0:00:50.192 ******** 2026-01-09 01:06:03.801727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.801762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.801771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.801778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.801789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.801802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.801814 | orchestrator | 2026-01-09 01:06:03.801821 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-09 01:06:03.801827 | orchestrator | Friday 09 January 2026 01:04:43 +0000 
(0:00:02.216) 0:00:52.408 ******** 2026-01-09 01:06:03.801834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.801840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.801847 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.801853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.801866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.801877 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.801889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.801895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.801901 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.801907 | orchestrator | 2026-01-09 01:06:03.801914 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-09 01:06:03.801958 | orchestrator | Friday 09 January 2026 01:04:44 +0000 (0:00:01.140) 0:00:53.549 ******** 2026-01-09 01:06:03.801966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.801976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.801982 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.802612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.802664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.802674 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.802682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.802689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.802695 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.802702 | orchestrator | 2026-01-09 01:06:03.802710 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-09 01:06:03.802733 | orchestrator | Friday 09 January 2026 01:04:45 +0000 (0:00:00.968) 0:00:54.518 ******** 2026-01-09 01:06:03.802744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802849 | orchestrator | 2026-01-09 01:06:03.802862 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-09 01:06:03.802869 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:03.178) 0:00:57.697 ******** 2026-01-09 01:06:03.802887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.802906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.802962 | orchestrator | 2026-01-09 01:06:03.802968 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2026-01-09 01:06:03.802974 | orchestrator | Friday 09 January 2026 01:04:57 +0000 (0:00:08.596) 0:01:06.293 ******** 2026-01-09 01:06:03.802980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.802986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.802992 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.803021 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.803031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.803037 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.803048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-09 01:06:03.803068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:06:03.803075 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.803081 | orchestrator | 2026-01-09 01:06:03.803087 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-09 01:06:03.803093 | orchestrator | Friday 09 January 2026 01:04:59 +0000 (0:00:01.900) 0:01:08.194 ******** 2026-01-09 01:06:03.803099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.803114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.803125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-09 01:06:03.803131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.803137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.803144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:06:03.803155 | orchestrator | 2026-01-09 01:06:03.803160 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-09 01:06:03.803166 | orchestrator | Friday 09 January 2026 01:05:03 +0000 (0:00:04.318) 0:01:12.512 ******** 2026-01-09 01:06:03.803187 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.803194 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.803201 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.803207 | orchestrator | 2026-01-09 01:06:03.803214 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-09 01:06:03.803220 | orchestrator | Friday 09 January 2026 01:05:04 +0000 (0:00:00.981) 0:01:13.494 ******** 2026-01-09 01:06:03.803226 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.803232 | orchestrator | 2026-01-09 01:06:03.803239 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-09 01:06:03.803245 | orchestrator | Friday 09 
January 2026 01:05:07 +0000 (0:00:02.757) 0:01:16.252 ******** 2026-01-09 01:06:03.803258 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.803265 | orchestrator | 2026-01-09 01:06:03.803271 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-09 01:06:03.803309 | orchestrator | Friday 09 January 2026 01:05:10 +0000 (0:00:03.084) 0:01:19.336 ******** 2026-01-09 01:06:03.803316 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.803322 | orchestrator | 2026-01-09 01:06:03.803328 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-09 01:06:03.803335 | orchestrator | Friday 09 January 2026 01:05:28 +0000 (0:00:18.112) 0:01:37.448 ******** 2026-01-09 01:06:03.803342 | orchestrator | 2026-01-09 01:06:03.803348 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-09 01:06:03.803355 | orchestrator | Friday 09 January 2026 01:05:28 +0000 (0:00:00.095) 0:01:37.543 ******** 2026-01-09 01:06:03.803361 | orchestrator | 2026-01-09 01:06:03.803367 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-09 01:06:03.803374 | orchestrator | Friday 09 January 2026 01:05:28 +0000 (0:00:00.128) 0:01:37.672 ******** 2026-01-09 01:06:03.803380 | orchestrator | 2026-01-09 01:06:03.803387 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-09 01:06:03.803394 | orchestrator | Friday 09 January 2026 01:05:28 +0000 (0:00:00.093) 0:01:37.766 ******** 2026-01-09 01:06:03.803405 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.803411 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:06:03.803417 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:06:03.803424 | orchestrator | 2026-01-09 01:06:03.803432 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] 
****************** 2026-01-09 01:06:03.803446 | orchestrator | Friday 09 January 2026 01:05:45 +0000 (0:00:17.145) 0:01:54.911 ******** 2026-01-09 01:06:03.803453 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.803460 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:06:03.803468 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:06:03.803477 | orchestrator | 2026-01-09 01:06:03.803486 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:06:03.803495 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-09 01:06:03.803504 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:06:03.803512 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:06:03.803529 | orchestrator | 2026-01-09 01:06:03.803536 | orchestrator | 2026-01-09 01:06:03.803544 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:06:03.803552 | orchestrator | Friday 09 January 2026 01:06:00 +0000 (0:00:14.927) 0:02:09.839 ******** 2026-01-09 01:06:03.803560 | orchestrator | =============================================================================== 2026-01-09 01:06:03.803569 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.11s 2026-01-09 01:06:03.803576 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.15s 2026-01-09 01:06:03.803583 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.93s 2026-01-09 01:06:03.803588 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.60s 2026-01-09 01:06:03.803594 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 8.52s 2026-01-09 
01:06:03.803600 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.54s 2026-01-09 01:06:03.803607 | orchestrator | magnum : Check magnum containers ---------------------------------------- 4.32s 2026-01-09 01:06:03.803613 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.17s 2026-01-09 01:06:03.803619 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.16s 2026-01-09 01:06:03.803625 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.15s 2026-01-09 01:06:03.803632 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.94s 2026-01-09 01:06:03.803638 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.84s 2026-01-09 01:06:03.803644 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2026-01-09 01:06:03.803649 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.73s 2026-01-09 01:06:03.803656 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.18s 2026-01-09 01:06:03.803662 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 3.08s 2026-01-09 01:06:03.803668 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.04s 2026-01-09 01:06:03.803674 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.76s 2026-01-09 01:06:03.803679 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.22s 2026-01-09 01:06:03.803685 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.90s 2026-01-09 01:06:03.803690 | orchestrator | 2026-01-09 01:06:03.803696 | orchestrator | 2026-01-09 01:06:03.803702 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2026-01-09 01:06:03.803707 | orchestrator | 2026-01-09 01:06:03.803713 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:06:03.803719 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-01-09 01:06:03.803725 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:06:03.803731 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:06:03.803737 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:06:03.803743 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:06:03.803748 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:06:03.803778 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:06:03.803785 | orchestrator | 2026-01-09 01:06:03.803791 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:06:03.803797 | orchestrator | Friday 09 January 2026 01:01:28 +0000 (0:00:00.625) 0:00:00.892 ******** 2026-01-09 01:06:03.803803 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-09 01:06:03.803809 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-09 01:06:03.803815 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-09 01:06:03.803821 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-09 01:06:03.803827 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-09 01:06:03.803838 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-09 01:06:03.803845 | orchestrator | 2026-01-09 01:06:03.803850 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-09 01:06:03.803856 | orchestrator | 2026-01-09 01:06:03.803866 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-09 01:06:03.803872 | orchestrator | Friday 09 January 2026 
01:01:29 +0000 (0:00:00.503) 0:00:01.395 ******** 2026-01-09 01:06:03.803878 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:06:03.803884 | orchestrator | 2026-01-09 01:06:03.803890 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-09 01:06:03.803902 | orchestrator | Friday 09 January 2026 01:01:30 +0000 (0:00:01.105) 0:00:02.500 ******** 2026-01-09 01:06:03.803908 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:06:03.803914 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:06:03.803920 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:06:03.803926 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:06:03.803932 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:06:03.803938 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:06:03.803943 | orchestrator | 2026-01-09 01:06:03.803949 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-09 01:06:03.803955 | orchestrator | Friday 09 January 2026 01:01:31 +0000 (0:00:01.208) 0:00:03.709 ******** 2026-01-09 01:06:03.803961 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:06:03.803967 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:06:03.803973 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:06:03.803987 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:06:03.803994 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:06:03.804000 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:06:03.804007 | orchestrator | 2026-01-09 01:06:03.804013 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-09 01:06:03.804020 | orchestrator | Friday 09 January 2026 01:01:32 +0000 (0:00:01.072) 0:00:04.782 ******** 2026-01-09 01:06:03.804026 | orchestrator | ok: [testbed-node-0] => { 2026-01-09 01:06:03.804033 | orchestrator |  
"changed": false, 2026-01-09 01:06:03.804039 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804046 | orchestrator | } 2026-01-09 01:06:03.804053 | orchestrator | ok: [testbed-node-1] => { 2026-01-09 01:06:03.804060 | orchestrator |  "changed": false, 2026-01-09 01:06:03.804066 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804072 | orchestrator | } 2026-01-09 01:06:03.804079 | orchestrator | ok: [testbed-node-2] => { 2026-01-09 01:06:03.804085 | orchestrator |  "changed": false, 2026-01-09 01:06:03.804091 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804097 | orchestrator | } 2026-01-09 01:06:03.804103 | orchestrator | ok: [testbed-node-3] => { 2026-01-09 01:06:03.804109 | orchestrator |  "changed": false, 2026-01-09 01:06:03.804115 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804120 | orchestrator | } 2026-01-09 01:06:03.804126 | orchestrator | ok: [testbed-node-4] => { 2026-01-09 01:06:03.804132 | orchestrator |  "changed": false, 2026-01-09 01:06:03.804138 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804145 | orchestrator | } 2026-01-09 01:06:03.804151 | orchestrator | ok: [testbed-node-5] => { 2026-01-09 01:06:03.804156 | orchestrator |  "changed": false, 2026-01-09 01:06:03.804162 | orchestrator |  "msg": "All assertions passed" 2026-01-09 01:06:03.804168 | orchestrator | } 2026-01-09 01:06:03.804174 | orchestrator | 2026-01-09 01:06:03.804181 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-09 01:06:03.804187 | orchestrator | Friday 09 January 2026 01:01:33 +0000 (0:00:00.675) 0:00:05.457 ******** 2026-01-09 01:06:03.804193 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.804232 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.804240 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.804264 | orchestrator | skipping: [testbed-node-3] 2026-01-09 
01:06:03.804270 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.804324 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.804332 | orchestrator | 2026-01-09 01:06:03.804338 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-09 01:06:03.804345 | orchestrator | Friday 09 January 2026 01:01:34 +0000 (0:00:00.566) 0:00:06.024 ******** 2026-01-09 01:06:03.804351 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-09 01:06:03.804358 | orchestrator | 2026-01-09 01:06:03.804364 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-01-09 01:06:03.804378 | orchestrator | Friday 09 January 2026 01:01:37 +0000 (0:00:03.840) 0:00:09.864 ******** 2026-01-09 01:06:03.804386 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-09 01:06:03.804393 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-09 01:06:03.804399 | orchestrator | 2026-01-09 01:06:03.804406 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-09 01:06:03.804412 | orchestrator | Friday 09 January 2026 01:01:45 +0000 (0:00:07.416) 0:00:17.281 ******** 2026-01-09 01:06:03.804419 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-09 01:06:03.804425 | orchestrator | 2026-01-09 01:06:03.804432 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-09 01:06:03.804438 | orchestrator | Friday 09 January 2026 01:01:48 +0000 (0:00:03.497) 0:00:20.778 ******** 2026-01-09 01:06:03.804444 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-09 01:06:03.804456 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-09 01:06:03.804462 | orchestrator | 2026-01-09 
01:06:03.804468 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-09 01:06:03.804474 | orchestrator | Friday 09 January 2026 01:01:53 +0000 (0:00:04.265) 0:00:25.044 ******** 2026-01-09 01:06:03.804481 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-09 01:06:03.804487 | orchestrator | 2026-01-09 01:06:03.804493 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-01-09 01:06:03.804500 | orchestrator | Friday 09 January 2026 01:01:56 +0000 (0:00:03.752) 0:00:28.797 ******** 2026-01-09 01:06:03.804505 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-09 01:06:03.804512 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-09 01:06:03.804518 | orchestrator | 2026-01-09 01:06:03.804524 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-09 01:06:03.804531 | orchestrator | Friday 09 January 2026 01:02:05 +0000 (0:00:08.348) 0:00:37.145 ******** 2026-01-09 01:06:03.804537 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.804543 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.804549 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.804555 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.804562 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.804568 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.804575 | orchestrator | 2026-01-09 01:06:03.804582 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-09 01:06:03.804597 | orchestrator | Friday 09 January 2026 01:02:06 +0000 (0:00:00.871) 0:00:38.016 ******** 2026-01-09 01:06:03.804604 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.804610 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.804616 | orchestrator | 
skipping: [testbed-node-5] 2026-01-09 01:06:03.804622 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.804629 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.804635 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.804642 | orchestrator | 2026-01-09 01:06:03.804648 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-09 01:06:03.804661 | orchestrator | Friday 09 January 2026 01:02:08 +0000 (0:00:02.715) 0:00:40.732 ******** 2026-01-09 01:06:03.804668 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:06:03.804675 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:06:03.804682 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:06:03.804689 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:06:03.804696 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:06:03.804702 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:06:03.804709 | orchestrator | 2026-01-09 01:06:03.804715 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-09 01:06:03.804721 | orchestrator | Friday 09 January 2026 01:02:09 +0000 (0:00:01.192) 0:00:41.924 ******** 2026-01-09 01:06:03.804727 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.804734 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.804742 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.804748 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.804755 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.804761 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.804767 | orchestrator | 2026-01-09 01:06:03.804773 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-09 01:06:03.804780 | orchestrator | Friday 09 January 2026 01:02:13 +0000 (0:00:03.802) 0:00:45.727 ******** 2026-01-09 01:06:03.804788 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.804796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.804808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.804841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804847 | orchestrator | 2026-01-09 01:06:03.804854 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-09 01:06:03.804861 | orchestrator | Friday 09 January 2026 01:02:16 +0000 (0:00:03.174) 0:00:48.902 ******** 2026-01-09 01:06:03.804868 | orchestrator | [WARNING]: Skipped 2026-01-09 01:06:03.804875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-09 01:06:03.804882 | orchestrator | due to this access issue: 2026-01-09 01:06:03.804888 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-09 01:06:03.804894 | orchestrator | a directory 2026-01-09 01:06:03.804900 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:06:03.804906 | orchestrator | 2026-01-09 01:06:03.804913 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-01-09 01:06:03.804919 | orchestrator | Friday 09 January 2026 01:02:17 +0000 (0:00:00.819) 0:00:49.721 ******** 2026-01-09 01:06:03.804926 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:06:03.804934 | orchestrator | 2026-01-09 01:06:03.804940 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-09 01:06:03.804947 | orchestrator | Friday 09 January 2026 01:02:19 +0000 (0:00:01.222) 0:00:50.944 ******** 2026-01-09 01:06:03.804956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.804990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.804997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.805007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.805018 | orchestrator | 2026-01-09 01:06:03.805025 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-09 01:06:03.805031 | orchestrator | 
Friday 09 January 2026 01:02:22 +0000 (0:00:03.717) 0:00:54.661 ******** 2026-01-09 01:06:03.805043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805050 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-01-09 01:06:03.805063 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.805070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805077 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.805088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805100 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.805110 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805117 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.805124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805131 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.805136 | orchestrator | 2026-01-09 01:06:03.805143 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-09 01:06:03.805149 | orchestrator | Friday 09 January 2026 01:02:26 +0000 (0:00:03.705) 0:00:58.367 ******** 2026-01-09 01:06:03.805155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805161 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.805168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805182 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.805191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805198 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.805237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805245 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805258 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.805264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805271 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.805290 | orchestrator | 2026-01-09 01:06:03.805297 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-09 01:06:03.805310 | orchestrator | Friday 09 January 2026 01:02:30 +0000 (0:00:03.646) 0:01:02.013 ******** 2026-01-09 01:06:03.805317 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805323 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.805330 | orchestrator | skipping: 
[testbed-node-2] 2026-01-09 01:06:03.805336 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.805343 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.805350 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.805357 | orchestrator | 2026-01-09 01:06:03.805364 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-09 01:06:03.805370 | orchestrator | Friday 09 January 2026 01:02:32 +0000 (0:00:02.671) 0:01:04.685 ******** 2026-01-09 01:06:03.805377 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805383 | orchestrator | 2026-01-09 01:06:03.805389 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-09 01:06:03.805395 | orchestrator | Friday 09 January 2026 01:02:32 +0000 (0:00:00.207) 0:01:04.892 ******** 2026-01-09 01:06:03.805401 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805407 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.805413 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.805419 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.805426 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.805435 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.805442 | orchestrator | 2026-01-09 01:06:03.805449 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-09 01:06:03.805455 | orchestrator | Friday 09 January 2026 01:02:33 +0000 (0:00:00.635) 0:01:05.528 ******** 2026-01-09 01:06:03.805467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805474 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.805480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805487 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.805492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.805504 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.805510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03 | INFO  | Task a92714c0-243f-43a3-97bc-b8e3a29f31c1 is in state SUCCESS 2026-01-09 01:06:03 | INFO  | Task 8794d6dc-c477-4e2b-9a8d-07f8b309b438 is in state SUCCESS 2026-01-09 01:06:03.805533 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.805542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805549 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.805562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.805569 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.805575 | orchestrator | 2026-01-09 01:06:03.805582 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-09 01:06:03.805588 | orchestrator | Friday 09 January 2026 01:02:37 +0000 (0:00:03.506) 0:01:09.034 ******** 2026-01-09 01:06:03.805595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.805607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.805933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.805956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.805963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.805969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.805982 | orchestrator | 2026-01-09 01:06:03.805988 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-09 01:06:03.805994 | orchestrator | Friday 09 January 2026 01:02:41 +0000 (0:00:04.539) 0:01:13.574 ******** 2026-01-09 01:06:03.806000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.806053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.806063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-09 01:06:03.806076 | orchestrator | 2026-01-09 01:06:03.806083 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-09 01:06:03.806088 | orchestrator | Friday 09 January 2026 01:02:49 +0000 (0:00:08.323) 0:01:21.897 ******** 2026-01-09 01:06:03.806103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806109 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806122 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806138 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806150 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806162 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806182 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806188 | orchestrator | 2026-01-09 01:06:03.806194 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-09 01:06:03.806200 | orchestrator | Friday 09 January 2026 01:02:53 +0000 (0:00:03.421) 0:01:25.319 ******** 2026-01-09 01:06:03.806206 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806212 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806218 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:06:03.806223 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:06:03.806229 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806235 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:06:03.806246 | orchestrator | 2026-01-09 01:06:03.806252 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-09 01:06:03.806258 | orchestrator | Friday 09 January 2026 01:02:56 +0000 (0:00:03.526) 0:01:28.845 ******** 2026-01-09 01:06:03.806264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806270 | orchestrator | skipping: [testbed-node-3] 2026-01-09 
01:06:03.806290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806297 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806309 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-09 01:06:03.806349 | orchestrator | 2026-01-09 01:06:03.806354 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-09 01:06:03.806360 | orchestrator | Friday 09 January 2026 01:03:01 +0000 (0:00:04.183) 0:01:33.029 ******** 2026-01-09 01:06:03.806367 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806372 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806378 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806384 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806390 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806396 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806402 | orchestrator | 2026-01-09 01:06:03.806408 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-09 01:06:03.806414 | orchestrator | Friday 09 January 2026 01:03:03 +0000 (0:00:02.623) 0:01:35.652 ******** 2026-01-09 01:06:03.806420 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806426 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806432 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806438 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806444 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806451 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806457 | orchestrator | 2026-01-09 01:06:03.806463 | 
orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-09 01:06:03.806469 | orchestrator | Friday 09 January 2026 01:03:06 +0000 (0:00:02.593) 0:01:38.246 ******** 2026-01-09 01:06:03.806475 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806481 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806486 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806492 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806498 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806504 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806511 | orchestrator | 2026-01-09 01:06:03.806516 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-09 01:06:03.806522 | orchestrator | Friday 09 January 2026 01:03:09 +0000 (0:00:02.930) 0:01:41.177 ******** 2026-01-09 01:06:03.806527 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806532 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806538 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806543 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806549 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806555 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806561 | orchestrator | 2026-01-09 01:06:03.806570 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-09 01:06:03.806576 | orchestrator | Friday 09 January 2026 01:03:11 +0000 (0:00:02.738) 0:01:43.915 ******** 2026-01-09 01:06:03.806582 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806588 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806594 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806601 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806612 | orchestrator | skipping: [testbed-node-4] 
2026-01-09 01:06:03.806618 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806625 | orchestrator | 2026-01-09 01:06:03.806630 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-09 01:06:03.806638 | orchestrator | Friday 09 January 2026 01:03:14 +0000 (0:00:02.124) 0:01:46.039 ******** 2026-01-09 01:06:03.806644 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806650 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806658 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806668 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806678 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806688 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806698 | orchestrator | 2026-01-09 01:06:03.806707 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-09 01:06:03.806717 | orchestrator | Friday 09 January 2026 01:03:16 +0000 (0:00:02.123) 0:01:48.163 ******** 2026-01-09 01:06:03.806727 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806736 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806746 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806755 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806765 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806774 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806783 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806791 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806801 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806811 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806817 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-09 01:06:03.806822 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806828 | orchestrator | 2026-01-09 01:06:03.806835 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-09 01:06:03.806841 | orchestrator | Friday 09 January 2026 01:03:18 +0000 (0:00:01.811) 0:01:49.975 ******** 2026-01-09 01:06:03.806848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806855 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806875 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806897 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.806903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806910 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.806916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806922 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.806928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.806939 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:06:03.806946 | orchestrator | 2026-01-09 01:06:03.806952 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-09 01:06:03.806957 | orchestrator | Friday 09 January 2026 01:03:19 +0000 (0:00:01.915) 0:01:51.890 ******** 2026-01-09 01:06:03.806963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806968 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:06:03.806981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.806988 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:06:03.806994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-09 01:06:03.807000 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:06:03.807007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.807017 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:06:03.807023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-09 01:06:03.807029 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:06:03.807035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-01-09 01:06:03.807042 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807048 | orchestrator |
2026-01-09 01:06:03.807055 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-09 01:06:03.807061 | orchestrator | Friday 09 January 2026 01:03:22 +0000 (0:00:02.723) 0:01:54.614 ********
2026-01-09 01:06:03.807067 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807077 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807084 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807090 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807098 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807105 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807112 | orchestrator |
2026-01-09 01:06:03.807121 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-09 01:06:03.807127 | orchestrator | Friday 09 January 2026 01:03:24 +0000 (0:00:01.917) 0:01:56.531 ********
2026-01-09 01:06:03.807133 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807139 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807145 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807151 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:06:03.807157 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:06:03.807162 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:06:03.807168 | orchestrator |
2026-01-09 01:06:03.807174 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-09 01:06:03.807179 | orchestrator | Friday 09 January 2026 01:03:27 +0000 (0:00:03.262) 0:01:59.793 ********
2026-01-09 01:06:03.807185 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807191 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807197 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807203 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807209 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807215 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807222 | orchestrator |
2026-01-09 01:06:03.807229 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-09 01:06:03.807235 | orchestrator | Friday 09 January 2026 01:03:30 +0000 (0:00:02.445) 0:02:02.239 ********
2026-01-09 01:06:03.807241 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807252 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807259 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807266 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807272 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807320 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807327 | orchestrator |
2026-01-09 01:06:03.807333 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-09 01:06:03.807340 | orchestrator | Friday 09 January 2026 01:03:34 +0000 (0:00:03.901) 0:02:06.141 ********
2026-01-09 01:06:03.807347 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807353 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807360 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807366 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807374 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807380 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807387 | orchestrator |
2026-01-09 01:06:03.807394 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-09 01:06:03.807401 | orchestrator | Friday 09 January 2026 01:03:37 +0000 (0:00:02.962) 0:02:09.104 ********
2026-01-09 01:06:03.807408 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807416 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807423 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807430 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807437 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807444 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807452 | orchestrator |
2026-01-09 01:06:03.807459 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-09 01:06:03.807466 | orchestrator | Friday 09 January 2026 01:03:39 +0000 (0:00:02.622) 0:02:11.726 ********
2026-01-09 01:06:03.807473 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807479 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807486 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807493 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807501 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807508 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807514 | orchestrator |
2026-01-09 01:06:03.807522 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-09 01:06:03.807529 | orchestrator | Friday 09 January 2026 01:03:42 +0000 (0:00:02.382) 0:02:14.109 ********
2026-01-09 01:06:03.807536 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807543 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807550 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807556 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807563 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807570 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807578 | orchestrator |
2026-01-09 01:06:03.807585 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-09 01:06:03.807592 | orchestrator | Friday 09 January 2026 01:03:46 +0000 (0:00:04.377) 0:02:18.486 ********
2026-01-09 01:06:03.807599 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807606 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807614 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807622 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807629 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807637 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807645 | orchestrator |
2026-01-09 01:06:03.807653 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-09 01:06:03.807661 | orchestrator | Friday 09 January 2026 01:03:49 +0000 (0:00:02.757) 0:02:21.244 ********
2026-01-09 01:06:03.807669 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807677 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.807690 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807697 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.807705 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807712 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.807720 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807727 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.807741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807749 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.807902 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-09 01:06:03.807988 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.807995 | orchestrator |
2026-01-09 01:06:03.808000 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-09 01:06:03.808005 | orchestrator | Friday 09 January 2026 01:03:51 +0000 (0:00:02.200) 0:02:23.444 ********
2026-01-09 01:06:03.808012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808021 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.808029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808036 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.808043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808050 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.808074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808079 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.808101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808106 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.808110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808114 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.808118 | orchestrator |
2026-01-09 01:06:03.808122 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-01-09 01:06:03.808126 | orchestrator | Friday 09 January 2026 01:03:53 +0000 (0:00:02.375) 0:02:25.820 ********
2026-01-09 01:06:03.808130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-09 01:06:03.808155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-09 01:06:03.808168 | orchestrator |
2026-01-09 01:06:03.808172 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-09 01:06:03.808176 | orchestrator | Friday 09 January 2026 01:03:57 +0000 (0:00:04.041) 0:02:29.861 ********
2026-01-09 01:06:03.808183 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:06:03.808186 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:06:03.808190 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:06:03.808194 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:06:03.808198 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:06:03.808202 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:06:03.808205 | orchestrator |
2026-01-09 01:06:03.808209 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-09 01:06:03.808213 | orchestrator | Friday 09 January 2026 01:03:58 +0000 (0:00:00.545) 0:02:30.407 ********
2026-01-09 01:06:03.808217 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.808220 | orchestrator |
2026-01-09 01:06:03.808224 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-09 01:06:03.808228 | orchestrator | Friday 09 January 2026 01:04:01 +0000 (0:00:02.667) 0:02:33.075 ********
2026-01-09 01:06:03.808232 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.808236 | orchestrator |
2026-01-09 01:06:03.808239 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-09 01:06:03.808243 | orchestrator | Friday 09 January 2026 01:04:03 +0000 (0:00:02.830) 0:02:35.905 ********
2026-01-09 01:06:03.808247 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.808251 | orchestrator |
2026-01-09 01:06:03.808254 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808258 | orchestrator | Friday 09 January 2026 01:04:47 +0000 (0:00:43.557) 0:03:19.463 ********
2026-01-09 01:06:03.808262 | orchestrator |
2026-01-09 01:06:03.808266 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808270 | orchestrator | Friday 09 January 2026 01:04:47 +0000 (0:00:00.088) 0:03:19.552 ********
2026-01-09 01:06:03.808303 | orchestrator |
2026-01-09 01:06:03.808309 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808312 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:00.499) 0:03:20.052 ********
2026-01-09 01:06:03.808316 | orchestrator |
2026-01-09 01:06:03.808320 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808324 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:00.077) 0:03:20.129 ********
2026-01-09 01:06:03.808327 | orchestrator |
2026-01-09 01:06:03.808335 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808339 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:00.076) 0:03:20.206 ********
2026-01-09 01:06:03.808343 | orchestrator |
2026-01-09 01:06:03.808349 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-09 01:06:03.808353 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:00.119) 0:03:20.325 ********
2026-01-09 01:06:03.808357 | orchestrator |
2026-01-09 01:06:03.808361 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-09 01:06:03.808364 | orchestrator | Friday 09 January 2026 01:04:48 +0000 (0:00:00.092) 0:03:20.417 ********
2026-01-09 01:06:03.808368 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:06:03.808372 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:06:03.808376 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:06:03.808380 | orchestrator |
2026-01-09 01:06:03.808383 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-09 01:06:03.808387 | orchestrator | Friday 09 January 2026 01:05:16 +0000 (0:00:28.461) 0:03:48.878 ********
2026-01-09 01:06:03.808391 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:06:03.808395 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:06:03.808398 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:06:03.808402 | orchestrator |
2026-01-09 01:06:03.808406 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:06:03.808410 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 01:06:03.808418 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-09 01:06:03.808422 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-09 01:06:03.808426 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 01:06:03.808430 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 01:06:03.808434 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-09 01:06:03.808437 | orchestrator |
2026-01-09 01:06:03.808441 | orchestrator |
2026-01-09 01:06:03.808445 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:06:03.808449 | orchestrator | Friday 09 January 2026 01:06:02 +0000 (0:00:45.133) 0:04:34.012 ********
2026-01-09 01:06:03.808453 | orchestrator | ===============================================================================
2026-01-09 01:06:03.808456 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 45.13s
2026-01-09 01:06:03.808460 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.56s
2026-01-09 01:06:03.808464 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.46s
2026-01-09 01:06:03.808468 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.35s
2026-01-09 01:06:03.808471 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.32s
2026-01-09 01:06:03.808475 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.42s
2026-01-09 01:06:03.808479 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.54s
2026-01-09 01:06:03.808483 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.38s
2026-01-09 01:06:03.808486 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.27s
2026-01-09 01:06:03.808490 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.18s
2026-01-09 01:06:03.808494 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.04s
2026-01-09 01:06:03.808498 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.90s
2026-01-09 01:06:03.808501 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.84s
2026-01-09 01:06:03.808505 | orchestrator | Setting sysctl values --------------------------------------------------- 3.80s
2026-01-09 01:06:03.808509 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.75s
2026-01-09 01:06:03.808513 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.72s
2026-01-09 01:06:03.808516 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.71s
2026-01-09 01:06:03.808521 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.65s
2026-01-09 01:06:03.808524 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.53s
2026-01-09 01:06:03.808528 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.51s
2026-01-09 01:06:03.808532 | orchestrator | 2026-01-09 01:06:03 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:03.808536 | orchestrator | 2026-01-09 01:06:03 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:03.808540 | orchestrator | 2026-01-09 01:06:03 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:06.835613 | orchestrator | 2026-01-09 01:06:06 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:06.836321 | orchestrator | 2026-01-09 01:06:06 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:06.837299 | orchestrator | 2026-01-09 01:06:06 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:06.837982 | orchestrator | 2026-01-09 01:06:06 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:06.838007 | orchestrator | 2026-01-09 01:06:06 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:09.874627 | orchestrator | 2026-01-09 01:06:09 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:09.876117 | orchestrator | 2026-01-09 01:06:09 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:09.877693 | orchestrator | 2026-01-09 01:06:09 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:09.880338 | orchestrator | 2026-01-09 01:06:09 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:09.880547 | orchestrator | 2026-01-09 01:06:09 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:12.910348 | orchestrator | 2026-01-09 01:06:12 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:12.910985 | orchestrator | 2026-01-09 01:06:12 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:12.912155 | orchestrator | 2026-01-09 01:06:12 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:12.912817 | orchestrator | 2026-01-09 01:06:12 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:12.913673 | orchestrator | 2026-01-09 01:06:12 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:15.954400 | orchestrator | 2026-01-09 01:06:15 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:15.958192 | orchestrator | 2026-01-09 01:06:15 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:15.958258 | orchestrator | 2026-01-09 01:06:15 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:15.958267 | orchestrator | 2026-01-09 01:06:15 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:15.958275 | orchestrator | 2026-01-09 01:06:15 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:19.022235 | orchestrator | 2026-01-09 01:06:19 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:19.023720 | orchestrator | 2026-01-09 01:06:19 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:19.025080 | orchestrator | 2026-01-09 01:06:19 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:19.026229 | orchestrator | 2026-01-09 01:06:19 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:19.026281 | orchestrator | 2026-01-09 01:06:19 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:22.069420 | orchestrator | 2026-01-09 01:06:22 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:22.070421 | orchestrator | 2026-01-09 01:06:22 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:22.073610 | orchestrator | 2026-01-09 01:06:22 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:22.073642 | orchestrator | 2026-01-09 01:06:22 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:22.073669 | orchestrator | 2026-01-09 01:06:22 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:25.119081 | orchestrator | 2026-01-09 01:06:25 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:25.119724 | orchestrator | 2026-01-09 01:06:25 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:25.121264 | orchestrator | 2026-01-09 01:06:25 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:25.122694 | orchestrator | 2026-01-09 01:06:25 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:25.122730 | orchestrator | 2026-01-09 01:06:25 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:28.162786 | orchestrator | 2026-01-09 01:06:28 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:28.163287 | orchestrator | 2026-01-09 01:06:28 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:28.164263 | orchestrator | 2026-01-09 01:06:28 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:28.165451 | orchestrator | 2026-01-09 01:06:28 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:28.165497 | orchestrator | 2026-01-09 01:06:28 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:31.200891 | orchestrator | 2026-01-09 01:06:31 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:31.202802 | orchestrator | 2026-01-09 01:06:31 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:31.206664 | orchestrator | 2026-01-09 01:06:31 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:31.209544 | orchestrator | 2026-01-09 01:06:31 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:31.209632 | orchestrator | 2026-01-09 01:06:31 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:34.251875 | orchestrator | 2026-01-09 01:06:34 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:34.255026 | orchestrator | 2026-01-09 01:06:34 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:34.258652 | orchestrator | 2026-01-09 01:06:34 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:34.261104 | orchestrator | 2026-01-09 01:06:34 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:34.261156 | orchestrator | 2026-01-09 01:06:34 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:37.300353 | orchestrator | 2026-01-09 01:06:37 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:37.301177 | orchestrator | 2026-01-09 01:06:37 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:37.303135 | orchestrator | 2026-01-09 01:06:37 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:37.305188 | orchestrator | 2026-01-09 01:06:37 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:37.305241 | orchestrator | 2026-01-09 01:06:37 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:06:40.335655 | orchestrator | 2026-01-09 01:06:40 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED
2026-01-09 01:06:40.338255 | orchestrator | 2026-01-09 01:06:40 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:06:40.338357 | orchestrator | 2026-01-09 01:06:40 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:06:40.338393 | orchestrator | 2026-01-09 01:06:40 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:06:40.338399 | orchestrator | 2026-01-09 01:06:40 | INFO  | Wait 1 second(s) until the next
check 2026-01-09 01:06:43.366002 | orchestrator | 2026-01-09 01:06:43 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:43.368282 | orchestrator | 2026-01-09 01:06:43 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:43.371698 | orchestrator | 2026-01-09 01:06:43 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:43.372388 | orchestrator | 2026-01-09 01:06:43 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:43.372422 | orchestrator | 2026-01-09 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:46.399021 | orchestrator | 2026-01-09 01:06:46 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:46.401337 | orchestrator | 2026-01-09 01:06:46 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:46.401922 | orchestrator | 2026-01-09 01:06:46 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:46.402766 | orchestrator | 2026-01-09 01:06:46 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:46.402791 | orchestrator | 2026-01-09 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:49.445749 | orchestrator | 2026-01-09 01:06:49 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:49.447471 | orchestrator | 2026-01-09 01:06:49 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:49.449905 | orchestrator | 2026-01-09 01:06:49 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:49.452555 | orchestrator | 2026-01-09 01:06:49 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:49.452609 | orchestrator | 2026-01-09 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-09 
01:06:52.485461 | orchestrator | 2026-01-09 01:06:52 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:52.485626 | orchestrator | 2026-01-09 01:06:52 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:52.485646 | orchestrator | 2026-01-09 01:06:52 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:52.486494 | orchestrator | 2026-01-09 01:06:52 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:52.486513 | orchestrator | 2026-01-09 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:55.517262 | orchestrator | 2026-01-09 01:06:55 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:55.522009 | orchestrator | 2026-01-09 01:06:55 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:55.522652 | orchestrator | 2026-01-09 01:06:55 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:55.524175 | orchestrator | 2026-01-09 01:06:55 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:55.524215 | orchestrator | 2026-01-09 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:06:58.553718 | orchestrator | 2026-01-09 01:06:58 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:06:58.554130 | orchestrator | 2026-01-09 01:06:58 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:06:58.557702 | orchestrator | 2026-01-09 01:06:58 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:06:58.559118 | orchestrator | 2026-01-09 01:06:58 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:06:58.559784 | orchestrator | 2026-01-09 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:01.602920 | orchestrator 
| 2026-01-09 01:07:01 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:01.602979 | orchestrator | 2026-01-09 01:07:01 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:01.602986 | orchestrator | 2026-01-09 01:07:01 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:01.602992 | orchestrator | 2026-01-09 01:07:01 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:01.602998 | orchestrator | 2026-01-09 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:04.626797 | orchestrator | 2026-01-09 01:07:04 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:04.627979 | orchestrator | 2026-01-09 01:07:04 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:04.629222 | orchestrator | 2026-01-09 01:07:04 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:04.630390 | orchestrator | 2026-01-09 01:07:04 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:04.630660 | orchestrator | 2026-01-09 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:07.670137 | orchestrator | 2026-01-09 01:07:07 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:07.672297 | orchestrator | 2026-01-09 01:07:07 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:07.675176 | orchestrator | 2026-01-09 01:07:07 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:07.677928 | orchestrator | 2026-01-09 01:07:07 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:07.677983 | orchestrator | 2026-01-09 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:10.717159 | orchestrator | 2026-01-09 01:07:10 | INFO  | 
Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:10.718635 | orchestrator | 2026-01-09 01:07:10 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:10.720424 | orchestrator | 2026-01-09 01:07:10 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:10.721324 | orchestrator | 2026-01-09 01:07:10 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:10.721381 | orchestrator | 2026-01-09 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:13.773255 | orchestrator | 2026-01-09 01:07:13 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:13.777628 | orchestrator | 2026-01-09 01:07:13 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:13.778647 | orchestrator | 2026-01-09 01:07:13 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:13.778747 | orchestrator | 2026-01-09 01:07:13 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:13.778901 | orchestrator | 2026-01-09 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:16.816091 | orchestrator | 2026-01-09 01:07:16 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:16.818201 | orchestrator | 2026-01-09 01:07:16 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:16.821089 | orchestrator | 2026-01-09 01:07:16 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:16.822896 | orchestrator | 2026-01-09 01:07:16 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:16.823301 | orchestrator | 2026-01-09 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:19.864017 | orchestrator | 2026-01-09 01:07:19 | INFO  | Task 
d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:19.864493 | orchestrator | 2026-01-09 01:07:19 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:19.865372 | orchestrator | 2026-01-09 01:07:19 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:19.866981 | orchestrator | 2026-01-09 01:07:19 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:19.867530 | orchestrator | 2026-01-09 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:22.914250 | orchestrator | 2026-01-09 01:07:22 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:22.916641 | orchestrator | 2026-01-09 01:07:22 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:22.918567 | orchestrator | 2026-01-09 01:07:22 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:22.920122 | orchestrator | 2026-01-09 01:07:22 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:22.920171 | orchestrator | 2026-01-09 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:25.975733 | orchestrator | 2026-01-09 01:07:25 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:25.976990 | orchestrator | 2026-01-09 01:07:25 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:25.977938 | orchestrator | 2026-01-09 01:07:25 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:25.980627 | orchestrator | 2026-01-09 01:07:25 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:25.980695 | orchestrator | 2026-01-09 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:29.030946 | orchestrator | 2026-01-09 01:07:29 | INFO  | Task 
d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:29.031784 | orchestrator | 2026-01-09 01:07:29 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:29.034106 | orchestrator | 2026-01-09 01:07:29 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:29.036178 | orchestrator | 2026-01-09 01:07:29 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:29.036519 | orchestrator | 2026-01-09 01:07:29 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:32.089184 | orchestrator | 2026-01-09 01:07:32 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:32.089344 | orchestrator | 2026-01-09 01:07:32 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:32.090983 | orchestrator | 2026-01-09 01:07:32 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:32.093429 | orchestrator | 2026-01-09 01:07:32 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:32.093554 | orchestrator | 2026-01-09 01:07:32 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:35.186324 | orchestrator | 2026-01-09 01:07:35 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:35.186476 | orchestrator | 2026-01-09 01:07:35 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:35.186483 | orchestrator | 2026-01-09 01:07:35 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:35.186488 | orchestrator | 2026-01-09 01:07:35 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:35.186493 | orchestrator | 2026-01-09 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:38.219724 | orchestrator | 2026-01-09 01:07:38 | INFO  | Task 
d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:38.220616 | orchestrator | 2026-01-09 01:07:38 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:38.222272 | orchestrator | 2026-01-09 01:07:38 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:38.223052 | orchestrator | 2026-01-09 01:07:38 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:38.223204 | orchestrator | 2026-01-09 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:41.268091 | orchestrator | 2026-01-09 01:07:41 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:41.270564 | orchestrator | 2026-01-09 01:07:41 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:41.272825 | orchestrator | 2026-01-09 01:07:41 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:41.275653 | orchestrator | 2026-01-09 01:07:41 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:41.275722 | orchestrator | 2026-01-09 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:44.327200 | orchestrator | 2026-01-09 01:07:44 | INFO  | Task d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state STARTED 2026-01-09 01:07:44.327788 | orchestrator | 2026-01-09 01:07:44 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:44.328425 | orchestrator | 2026-01-09 01:07:44 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:44.330411 | orchestrator | 2026-01-09 01:07:44 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:44.330451 | orchestrator | 2026-01-09 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:47.388844 | orchestrator | 2026-01-09 01:07:47 | INFO  | Task 
d7ef6f37-937a-49db-b6ca-24efd2cd0a85 is in state SUCCESS 2026-01-09 01:07:47.390604 | orchestrator | 2026-01-09 01:07:47.390686 | orchestrator | 2026-01-09 01:07:47.390697 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:07:47.390705 | orchestrator | 2026-01-09 01:07:47.390712 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:07:47.390719 | orchestrator | Friday 09 January 2026 01:04:28 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-01-09 01:07:47.390726 | orchestrator | ok: [testbed-manager] 2026-01-09 01:07:47.390734 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:07:47.390741 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:07:47.390747 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:07:47.390753 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:07:47.390760 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:07:47.390793 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:07:47.390800 | orchestrator | 2026-01-09 01:07:47.390806 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:07:47.390813 | orchestrator | Friday 09 January 2026 01:04:29 +0000 (0:00:00.715) 0:00:00.974 ******** 2026-01-09 01:07:47.390821 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-09 01:07:47.390826 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-09 01:07:47.391177 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-09 01:07:47.391192 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-09 01:07:47.391198 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-09 01:07:47.391204 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-09 01:07:47.391211 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-09 
01:07:47.391216 | orchestrator | 2026-01-09 01:07:47.391222 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-09 01:07:47.391228 | orchestrator | 2026-01-09 01:07:47.391234 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-09 01:07:47.391240 | orchestrator | Friday 09 January 2026 01:04:30 +0000 (0:00:00.655) 0:00:01.629 ******** 2026-01-09 01:07:47.391249 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:07:47.391257 | orchestrator | 2026-01-09 01:07:47.391279 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-09 01:07:47.391286 | orchestrator | Friday 09 January 2026 01:04:31 +0000 (0:00:01.305) 0:00:02.934 ******** 2026-01-09 01:07:47.391298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 01:07:47.391308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391431 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.391444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391478 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-09 01:07:47.391485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391522 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.391541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.391555 | orchestrator |
2026-01-09 01:07:47.391559 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-09 01:07:47.391563 | orchestrator | Friday 09 January 2026 01:04:34 +0000 (0:00:02.612) 0:00:05.547
******** 2026-01-09 01:07:47.391567 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:07:47.391571 | orchestrator | 2026-01-09 01:07:47.391575 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-09 01:07:47.391579 | orchestrator | Friday 09 January 2026 01:04:35 +0000 (0:00:01.251) 0:00:06.798 ******** 2026-01-09 01:07:47.391586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 01:07:47.391590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391631 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.391635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391650 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391661 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391672 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-09 01:07:47.391677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391702 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.391735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.391754 | orchestrator | 2026-01-09 01:07:47.391759 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-09 01:07:47.391995 | orchestrator | Friday 09 January 2026 01:04:41 +0000 (0:00:06.459) 0:00:13.258 ******** 2026-01-09 01:07:47.392013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-09 01:07:47.392059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392064 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-09 01:07:47.392080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392157 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.392163 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.392170 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.392180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392217 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.392461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392500 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.392513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392517 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392525 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.392529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392555 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.392559 | orchestrator |
2026-01-09 01:07:47.392563 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-01-09 01:07:47.392567 | orchestrator | Friday 09 January 2026 01:04:43 +0000 (0:00:01.474) 0:00:14.732 ********
2026-01-09 01:07:47.392571 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}}}})  2026-01-09 01:07:47.392586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-09 01:07:47.392671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392677 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.392701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392742 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.392749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.392805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.392837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.392848 | orchestrator | skipping: 
[testbed-node-1] 2026-01-09 01:07:47.393269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-09 01:07:47.393282 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.393304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.393309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393326 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.393336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.393340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393348 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.393352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-09 01:07:47.393356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-09 01:07:47.393430 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.393434 | orchestrator |
2026-01-09 01:07:47.393439 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-01-09 01:07:47.393443 | orchestrator | Friday 09 January 2026 01:04:45 +0000 (0:00:02.459) 0:00:17.191 ********
2026-01-09 01:07:47.393447 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-09 01:07:47.393455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-09 01:07:47.393522 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393579 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393622 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-09 01:07:47.393653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-09 01:07:47.393691 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-09 01:07:47.393720 | orchestrator | 2026-01-09 01:07:47.393724 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-09 01:07:47.393729 | orchestrator | Friday 09 January 2026 01:04:54 +0000 (0:00:08.844) 0:00:26.036 ******** 2026-01-09 01:07:47.393733 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 01:07:47.393737 | orchestrator | 2026-01-09 01:07:47.393741 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-09 01:07:47.393759 | 
orchestrator | Friday 09 January 2026 01:04:56 +0000 (0:00:01.823) 0:00:27.859 ******** 2026-01-09 01:07:47.393764 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393769 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393777 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.393781 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393785 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393793 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393808 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084795, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1221163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393813 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393818 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393825 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 
'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393829 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393837 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393841 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393856 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393860 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393864 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1085007, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2883408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2026-01-09 01:07:47.393871 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393888 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393904 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393910 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393915 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 
103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393923 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393936 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393941 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084800, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1226819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393945 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085001, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2864585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393963 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 
01:07:47.393968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393976 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393980 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393989 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084964, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2861133, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.393993 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084789, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.393998 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084800, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1226819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394092 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084800, 'dev': 103, 'nlink': 1, 
'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1226819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394105 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084783, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1194723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394116 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084800, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1226819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394130 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084800, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1226819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394135 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084801, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1236033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394140 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084964, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2861133, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394144 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084964, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2861133, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394165 | orchestrator | 
skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394171 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-01-09 01:07:47.394178 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-01-09 01:07:47.394189 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-01-09 01:07:47.394193 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394198 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-01-09 01:07:47.394202 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-01-09 01:07:47.394222 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-01-09 01:07:47.394227 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-01-09 01:07:47.394235 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394244 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-01-09 01:07:47.394248 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394253 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394258 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394276 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394281 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-01-09 01:07:47.394289 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394296 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394300 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-01-09 01:07:47.394304 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394308 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394326 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394330 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394341 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-09 01:07:47.394345 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394349 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394353 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394357 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394414 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-01-09 01:07:47.394420 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-09 01:07:47.394428 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394435 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394440 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394444 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394447 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-09 01:07:47.394456 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394461 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394470 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394480 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394490 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394498 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-01-09 01:07:47.394504 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394517 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394546 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-01-09 01:07:47.394553 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394563 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394569 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-09 01:07:47.394575 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-01-09 01:07:47.394582 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394596 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394605 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-01-09 01:07:47.394609 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394616 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2026-01-09 01:07:47.394620 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394703 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394707 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-09 01:07:47.394717 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2026-01-09 01:07:47.394730 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-01-09 01:07:47.394735 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-01-09 01:07:47.394742 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-09 01:07:47.394746 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:07:47.394750 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2026-01-09 01:07:47.394754 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-01-09 01:07:47.394758 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2026-01-09 01:07:47.394768 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2026-01-09 01:07:47.394772 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-09 01:07:47.394776 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:07:47.394780 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-09 01:07:47.394784 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:07:47.394791 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-01-09 01:07:47.394795 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
False, 'isgid': False})  2026-01-09 01:07:47.394799 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.394803 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084808, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1274571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394807 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085022, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2914584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394814 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.394821 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084964, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2861133, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-01-09 01:07:47.394825 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084803, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1274571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394829 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085022, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2914584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-09 01:07:47.394836 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.394840 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084801, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1236033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394844 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084793, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1216037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394848 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085005, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2878692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084777, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1190295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394863 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085024, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2914584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394867 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085004, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2876754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394871 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084784, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1204793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394878 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084780, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 
'ctime': 1767917799.1193457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394882 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084808, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1274571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394886 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084803, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1274571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394894 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085022, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.2914584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-09 01:07:47.394898 | orchestrator | 2026-01-09 01:07:47.394902 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-09 01:07:47.394908 | orchestrator | Friday 09 January 2026 01:05:23 +0000 (0:00:27.073) 0:00:54.932 ******** 2026-01-09 01:07:47.394917 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 01:07:47.394923 | orchestrator | 2026-01-09 01:07:47.394929 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-09 01:07:47.394935 | orchestrator | Friday 09 January 2026 01:05:24 +0000 (0:00:00.711) 0:00:55.644 ******** 2026-01-09 01:07:47.394941 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.394947 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.394954 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.394960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.394965 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.394972 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:07:47.394978 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.394984 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.394990 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.394995 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395001 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395007 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395014 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395019 | orchestrator | 
node-1/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.395025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395031 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395036 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395043 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395048 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.395054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395060 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395067 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395077 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395083 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.395089 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395095 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395101 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395112 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-09 01:07:47.395125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395131 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395138 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395143 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395149 | orchestrator | node-2/prometheus.yml.d' path due to this access 
issue: 2026-01-09 01:07:47.395155 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-09 01:07:47.395161 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-09 01:07:47.395167 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 01:07:47.395173 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-09 01:07:47.395179 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-09 01:07:47.395185 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 01:07:47.395191 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-09 01:07:47.395197 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-09 01:07:47.395204 | orchestrator | 2026-01-09 01:07:47.395211 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-09 01:07:47.395217 | orchestrator | Friday 09 January 2026 01:05:26 +0000 (0:00:02.360) 0:00:58.005 ******** 2026-01-09 01:07:47.395223 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395231 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395237 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395243 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395256 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395262 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395268 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395275 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395280 | orchestrator | skipping: 
[testbed-node-4] 2026-01-09 01:07:47.395286 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-09 01:07:47.395291 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395297 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-09 01:07:47.395302 | orchestrator | 2026-01-09 01:07:47.395308 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-09 01:07:47.395314 | orchestrator | Friday 09 January 2026 01:05:42 +0000 (0:00:16.499) 0:01:14.504 ******** 2026-01-09 01:07:47.395326 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395333 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395340 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395347 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395353 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395377 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395383 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395389 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395395 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395401 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395406 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-09 01:07:47.395418 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395423 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-09 01:07:47.395429 | orchestrator | 2026-01-09 01:07:47.395436 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-09 01:07:47.395442 | orchestrator | Friday 09 January 2026 01:05:46 +0000 (0:00:03.594) 0:01:18.100 ******** 2026-01-09 01:07:47.395450 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395455 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395460 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395465 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-09 01:07:47.395474 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395478 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395483 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395487 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395492 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395496 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395501 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395506 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-09 01:07:47.395510 | orchestrator | skipping: [testbed-node-5] 
2026-01-09 01:07:47.395515 | orchestrator | 2026-01-09 01:07:47.395519 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-09 01:07:47.395524 | orchestrator | Friday 09 January 2026 01:05:49 +0000 (0:00:03.215) 0:01:21.316 ******** 2026-01-09 01:07:47.395529 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 01:07:47.395533 | orchestrator | 2026-01-09 01:07:47.395538 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-09 01:07:47.395542 | orchestrator | Friday 09 January 2026 01:05:50 +0000 (0:00:00.937) 0:01:22.254 ******** 2026-01-09 01:07:47.395547 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.395552 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395556 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395561 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395566 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395569 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395573 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395577 | orchestrator | 2026-01-09 01:07:47.395581 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-09 01:07:47.395585 | orchestrator | Friday 09 January 2026 01:05:51 +0000 (0:00:00.521) 0:01:22.775 ******** 2026-01-09 01:07:47.395589 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.395592 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395596 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395600 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395604 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:07:47.395607 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:07:47.395611 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:07:47.395615 | orchestrator | 2026-01-09 
01:07:47.395618 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-09 01:07:47.395622 | orchestrator | Friday 09 January 2026 01:05:53 +0000 (0:00:02.570) 0:01:25.346 ******** 2026-01-09 01:07:47.395630 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395634 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395638 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.395641 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395645 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395649 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395653 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395657 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395664 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395668 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395672 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395675 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395679 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-09 01:07:47.395683 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395687 | orchestrator | 2026-01-09 01:07:47.395691 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-09 01:07:47.395694 | orchestrator | Friday 09 January 2026 01:05:55 +0000 (0:00:01.604) 0:01:26.950 ******** 2026-01-09 01:07:47.395698 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395704 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395710 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395716 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395722 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395727 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.395733 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395739 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.395745 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395751 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.395757 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-09 01:07:47.395763 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-09 01:07:47.395773 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.395779 | orchestrator | 2026-01-09 01:07:47.395785 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-09 01:07:47.395790 | orchestrator | Friday 09 January 2026 01:05:56 +0000 (0:00:01.389) 0:01:28.340 ******** 2026-01-09 01:07:47.395796 | orchestrator | [WARNING]: Skipped 2026-01-09 01:07:47.395802 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-09 01:07:47.395808 | orchestrator | due to this access issue: 
2026-01-09 01:07:47.395815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-09 01:07:47.395821 | orchestrator | not a directory 2026-01-09 01:07:47.395827 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-09 01:07:47.395834 | orchestrator | 2026-01-09 01:07:47.395840 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-09 01:07:47.395846 | orchestrator | Friday 09 January 2026 01:05:57 +0000 (0:00:00.996) 0:01:29.337 ******** 2026-01-09 01:07:47.395852 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.395871 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.395877 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.395995 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.396005 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.396011 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.396016 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.396021 | orchestrator | 2026-01-09 01:07:47.396027 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-09 01:07:47.396033 | orchestrator | Friday 09 January 2026 01:05:58 +0000 (0:00:00.822) 0:01:30.159 ******** 2026-01-09 01:07:47.396039 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:07:47.396045 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:07:47.396051 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:07:47.396056 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:07:47.396062 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:07:47.396068 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:07:47.396074 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:07:47.396080 | orchestrator | 2026-01-09 01:07:47.396087 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 
2026-01-09 01:07:47.396093 | orchestrator | Friday 09 January 2026 01:05:59 +0000 (0:00:00.735) 0:01:30.894 ********
2026-01-09 01:07:47.396099 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-09 01:07:47.396111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396130 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-09 01:07:47.396172 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396216 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-09 01:07:47.396228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396240 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-09 01:07:47.396260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-09 01:07:47.396278 | orchestrator |
2026-01-09 01:07:47.396282 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-09 01:07:47.396286 | orchestrator | Friday 09 January 2026 01:06:04 +0000 (0:00:04.860) 0:01:35.754 ********
2026-01-09 01:07:47.396290 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-09 01:07:47.396294 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:07:47.396297 | orchestrator |
2026-01-09 01:07:47.396301 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396305 | orchestrator | Friday 09 January 2026 01:06:05 +0000 (0:00:01.338) 0:01:37.093 ********
2026-01-09 01:07:47.396309 | orchestrator |
2026-01-09 01:07:47.396312 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396316 | orchestrator | Friday 09 January 2026 01:06:05 +0000 (0:00:00.066) 0:01:37.159 ********
2026-01-09 01:07:47.396320 | orchestrator |
2026-01-09 01:07:47.396340 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396344 | orchestrator | Friday 09 January 2026 01:06:05 +0000 (0:00:00.069) 0:01:37.229 ********
2026-01-09 01:07:47.396348 | orchestrator |
2026-01-09 01:07:47.396351 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396356 | orchestrator | Friday 09 January 2026 01:06:05 +0000 (0:00:00.154) 0:01:37.384 ********
2026-01-09 01:07:47.396381 | orchestrator |
2026-01-09 01:07:47.396385 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396389 | orchestrator | Friday 09 January 2026 01:06:06 +0000 (0:00:00.373) 0:01:37.757 ********
2026-01-09 01:07:47.396393 | orchestrator |
2026-01-09 01:07:47.396397 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396401 | orchestrator | Friday 09 January 2026 01:06:06 +0000 (0:00:00.102) 0:01:37.859 ********
2026-01-09 01:07:47.396405 | orchestrator |
2026-01-09 01:07:47.396408 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-09 01:07:47.396412 | orchestrator | Friday 09 January 2026 01:06:06 +0000 (0:00:00.140) 0:01:38.000 ********
2026-01-09 01:07:47.396416 | orchestrator |
2026-01-09 01:07:47.396420 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-09 01:07:47.396423 | orchestrator | Friday 09 January 2026 01:06:06 +0000 (0:00:00.086) 0:01:38.087 ********
2026-01-09 01:07:47.396427 | orchestrator | changed: [testbed-manager]
2026-01-09 01:07:47.396431 | orchestrator |
2026-01-09 01:07:47.396435 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-09 01:07:47.396445 | orchestrator | Friday 09 January 2026 01:06:21 +0000 (0:00:15.254) 0:01:53.342 ********
2026-01-09 01:07:47.396449 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:07:47.396453 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:07:47.396457 | orchestrator | changed: [testbed-manager]
2026-01-09 01:07:47.396460 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:07:47.396464 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:07:47.396468 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:07:47.396472 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:07:47.396475 | orchestrator |
2026-01-09 01:07:47.396479 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-09 01:07:47.396483 | orchestrator | Friday 09 January 2026 01:06:38 +0000 (0:00:16.488) 0:02:09.831 ********
2026-01-09 01:07:47.396487 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:07:47.396490 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:07:47.396494 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:07:47.396498 | orchestrator |
2026-01-09 01:07:47.396502 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-09 01:07:47.396506 | orchestrator | Friday 09 January 2026 01:06:44 +0000 (0:00:06.148) 0:02:15.979 ********
2026-01-09 01:07:47.396510 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:07:47.396513 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:07:47.396517 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:07:47.396521 | orchestrator |
2026-01-09 01:07:47.396525 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-09 01:07:47.396528 | orchestrator | Friday 09 January 2026 01:06:51 +0000 (0:00:07.169) 0:02:23.149 ********
2026-01-09 01:07:47.396532 | orchestrator | changed: [testbed-manager]
2026-01-09 01:07:47.396536 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:07:47.396540 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:07:47.396543 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:07:47.396547 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:07:47.396551 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:07:47.396554 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:07:47.396559 | orchestrator |
2026-01-09 01:07:47.396563 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-09 01:07:47.396567 | orchestrator | Friday 09 January 2026 01:07:06 +0000 (0:00:15.135) 0:02:38.285 ********
2026-01-09 01:07:47.396570 | orchestrator | changed: [testbed-manager]
2026-01-09 01:07:47.396574 | orchestrator |
2026-01-09 01:07:47.396581 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-09 01:07:47.396585 | orchestrator | Friday 09 January 2026 01:07:19 +0000 (0:00:12.520) 0:02:50.805 ********
2026-01-09 01:07:47.396589 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:07:47.396593 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:07:47.396596 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:07:47.396600 | orchestrator |
2026-01-09 01:07:47.396604 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-09 01:07:47.396607 | orchestrator | Friday 09 January 2026 01:07:29 +0000 (0:00:09.829) 0:03:00.635 ********
2026-01-09 01:07:47.396611 | orchestrator | changed: [testbed-manager]
2026-01-09 01:07:47.396615 | orchestrator |
2026-01-09 01:07:47.396619 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-09 01:07:47.396623 | orchestrator | Friday 09 January 2026 01:07:34 +0000 (0:00:04.966) 0:03:05.602 ********
2026-01-09 01:07:47.396627 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:07:47.396632 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:07:47.396637 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:07:47.396641 | orchestrator |
2026-01-09 01:07:47.396645 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:07:47.396651 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-09 01:07:47.396660 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-09 01:07:47.396664 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-09 01:07:47.396669 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-09 01:07:47.396674 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-09 01:07:47.396679 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-09 01:07:47.396683 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-09 01:07:47.396687 | orchestrator |
2026-01-09 01:07:47.396692 | orchestrator |
2026-01-09 01:07:47.396744 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:07:47.396750 | orchestrator | Friday 09 January 2026 01:07:44 +0000 (0:00:09.949) 0:03:15.551 ********
2026-01-09 01:07:47.396754 | orchestrator | ===============================================================================
2026-01-09 01:07:47.396758 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.07s
2026-01-09 01:07:47.396763 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.50s
2026-01-09 01:07:47.396767 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.49s
2026-01-09 01:07:47.396772 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.25s
2026-01-09 01:07:47.396779 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.14s
2026-01-09 01:07:47.396784 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.52s
2026-01-09 01:07:47.396788 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.95s
2026-01-09 01:07:47.396793 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.83s
2026-01-09 01:07:47.396797 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.84s
2026-01-09 01:07:47.396802 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.17s
2026-01-09 01:07:47.396806 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.46s
2026-01-09 01:07:47.396811 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.15s
2026-01-09 01:07:47.396815 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.97s
2026-01-09 01:07:47.396820 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.86s
2026-01-09 01:07:47.396825 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.60s
2026-01-09 01:07:47.396829 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.22s
2026-01-09 01:07:47.396834 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.61s
2026-01-09 01:07:47.396838 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.57s
2026-01-09 01:07:47.396843 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.46s
2026-01-09 01:07:47.396847 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.36s
2026-01-09 01:07:47.396852 | orchestrator | 2026-01-09 01:07:47 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:07:47.396857 | orchestrator | 2026-01-09 01:07:47 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:07:47.396861 | orchestrator | 2026-01-09 01:07:47 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:07:47.396911 | orchestrator | 2026-01-09 01:07:47 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:07:47.396919 | orchestrator | 2026-01-09 01:07:47 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:07:50.430443 | orchestrator | 2026-01-09 01:07:50 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:07:50.430551 | orchestrator | 2026-01-09 01:07:50 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:07:50.432072 | orchestrator | 2026-01-09 01:07:50 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:07:50.432117 | orchestrator | 2026-01-09 01:07:50 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:07:50.432122 | orchestrator | 2026-01-09 01:07:50 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:07:53.490641 | orchestrator | 2026-01-09 01:07:53 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED
2026-01-09 01:07:53.493848 | orchestrator | 2026-01-09 01:07:53 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:07:53.495923 | orchestrator | 2026-01-09 01:07:53 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED
2026-01-09 01:07:53.498115 | orchestrator | 2026-01-09 01:07:53 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:07:53.498188 | orchestrator | 2026-01-09 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:07:56.549255 | orchestrator | 2026-01-09 01:07:56 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:56.550065 | orchestrator | 2026-01-09 01:07:56 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:07:56.550987 | orchestrator | 2026-01-09 01:07:56 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:56.551914 | orchestrator | 2026-01-09 01:07:56 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:56.552308 | orchestrator | 2026-01-09 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:07:59.591145 | orchestrator | 2026-01-09 01:07:59 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:07:59.592292 | orchestrator | 2026-01-09 01:07:59 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:07:59.593079 | orchestrator | 2026-01-09 01:07:59 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:07:59.594312 | orchestrator | 2026-01-09 01:07:59 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:07:59.594500 | orchestrator | 2026-01-09 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:02.661146 | orchestrator | 2026-01-09 01:08:02 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:02.664452 | orchestrator | 2026-01-09 01:08:02 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:02.667311 | orchestrator | 2026-01-09 01:08:02 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:08:02.669785 | orchestrator | 2026-01-09 01:08:02 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:02.669844 | orchestrator | 2026-01-09 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:05.718818 | 
orchestrator | 2026-01-09 01:08:05 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:05.721590 | orchestrator | 2026-01-09 01:08:05 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:05.724217 | orchestrator | 2026-01-09 01:08:05 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:08:05.726254 | orchestrator | 2026-01-09 01:08:05 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:05.726311 | orchestrator | 2026-01-09 01:08:05 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:08.767440 | orchestrator | 2026-01-09 01:08:08 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:08.769596 | orchestrator | 2026-01-09 01:08:08 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:08.772360 | orchestrator | 2026-01-09 01:08:08 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:08:08.775197 | orchestrator | 2026-01-09 01:08:08 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:08.775245 | orchestrator | 2026-01-09 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:11.816951 | orchestrator | 2026-01-09 01:08:11 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:11.820617 | orchestrator | 2026-01-09 01:08:11 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:11.821688 | orchestrator | 2026-01-09 01:08:11 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state STARTED 2026-01-09 01:08:11.823427 | orchestrator | 2026-01-09 01:08:11 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:11.823460 | orchestrator | 2026-01-09 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:14.870276 | orchestrator | 2026-01-09 
01:08:14 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:14.872430 | orchestrator | 2026-01-09 01:08:14 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:14.874744 | orchestrator | 2026-01-09 01:08:14 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:08:14.878075 | orchestrator | 2026-01-09 01:08:14 | INFO  | Task 5dae379d-2fcd-4e61-94ee-107c5c2a0d7a is in state SUCCESS 2026-01-09 01:08:14.879991 | orchestrator | 2026-01-09 01:08:14.880026 | orchestrator | 2026-01-09 01:08:14.880033 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:08:14.880040 | orchestrator | 2026-01-09 01:08:14.880046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:08:14.880051 | orchestrator | Friday 09 January 2026 01:05:15 +0000 (0:00:00.242) 0:00:00.242 ******** 2026-01-09 01:08:14.880056 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:08:14.880063 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:08:14.880068 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:08:14.880073 | orchestrator | 2026-01-09 01:08:14.880078 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:08:14.880082 | orchestrator | Friday 09 January 2026 01:05:15 +0000 (0:00:00.260) 0:00:00.503 ******** 2026-01-09 01:08:14.880088 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-09 01:08:14.880093 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-09 01:08:14.880098 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-09 01:08:14.880102 | orchestrator | 2026-01-09 01:08:14.880107 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-09 01:08:14.880111 | orchestrator | 2026-01-09 01:08:14.880116 | 
orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-09 01:08:14.880121 | orchestrator | Friday 09 January 2026 01:05:16 +0000 (0:00:00.370) 0:00:00.873 ********
2026-01-09 01:08:14.880146 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:08:14.880153 | orchestrator |
2026-01-09 01:08:14.880158 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-01-09 01:08:14.880162 | orchestrator | Friday 09 January 2026 01:05:16 +0000 (0:00:00.495) 0:00:01.369 ********
2026-01-09 01:08:14.880167 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-09 01:08:14.880172 | orchestrator |
2026-01-09 01:08:14.880177 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-01-09 01:08:14.880181 | orchestrator | Friday 09 January 2026 01:05:20 +0000 (0:00:03.604) 0:00:04.974 ********
2026-01-09 01:08:14.880187 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-09 01:08:14.880194 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-09 01:08:14.880201 | orchestrator |
2026-01-09 01:08:14.880207 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-09 01:08:14.880213 | orchestrator | Friday 09 January 2026 01:05:28 +0000 (0:00:08.237) 0:00:13.211 ********
2026-01-09 01:08:14.880220 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-09 01:08:14.880228 | orchestrator |
2026-01-09 01:08:14.880235 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-09 01:08:14.880241 | orchestrator | Friday 09 January 2026 01:05:31 +0000 (0:00:03.245) 0:00:16.457 ********
2026-01-09 01:08:14.880366 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-09 01:08:14.880376 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-09 01:08:14.880417 | orchestrator |
2026-01-09 01:08:14.880424 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-09 01:08:14.880430 | orchestrator | Friday 09 January 2026 01:05:35 +0000 (0:00:04.055) 0:00:20.512 ********
2026-01-09 01:08:14.880437 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-09 01:08:14.880443 | orchestrator |
2026-01-09 01:08:14.880449 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-01-09 01:08:14.880456 | orchestrator | Friday 09 January 2026 01:05:39 +0000 (0:00:03.405) 0:00:23.918 ********
2026-01-09 01:08:14.880463 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-09 01:08:14.880469 | orchestrator |
2026-01-09 01:08:14.880475 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-09 01:08:14.880482 | orchestrator | Friday 09 January 2026 01:05:43 +0000 (0:00:04.092) 0:00:28.010 ********
2026-01-09 01:08:14.880519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'},
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.880536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.880545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-09 01:08:14.880550 | orchestrator |
2026-01-09 01:08:14.880555 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-09 01:08:14.880560 | orchestrator | Friday 09 January 2026 01:05:49 +0000 (0:00:06.039) 0:00:34.049 ********
2026-01-09 01:08:14.880565 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:08:14.880574 | orchestrator |
2026-01-09 01:08:14.880578 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-09 01:08:14.880588 | orchestrator | Friday 09 January 2026 01:05:50 +0000 (0:00:00.670) 0:00:34.719 ********
2026-01-09 01:08:14.880593 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:08:14.880598 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:08:14.880602 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:08:14.880606 | orchestrator |
2026-01-09 01:08:14.880611 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-09 01:08:14.880615 | orchestrator | Friday 09 January 2026 01:05:54 +0000 (0:00:03.990) 0:00:38.710 ********
2026-01-09 01:08:14.880620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:08:14.880626 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:08:14.880630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph',
'enabled': True})
2026-01-09 01:08:14.880635 | orchestrator |
2026-01-09 01:08:14.880639 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-09 01:08:14.880643 | orchestrator | Friday 09 January 2026 01:05:55 +0000 (0:00:01.659) 0:00:40.369 ********
2026-01-09 01:08:14.880648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:08:14.880652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:08:14.880656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:08:14.880661 | orchestrator |
2026-01-09 01:08:14.880666 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-09 01:08:14.880670 | orchestrator | Friday 09 January 2026 01:05:57 +0000 (0:00:01.436) 0:00:41.806 ********
2026-01-09 01:08:14.880675 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:08:14.880680 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:08:14.880684 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:08:14.880689 | orchestrator |
2026-01-09 01:08:14.880694 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-01-09 01:08:14.880698 | orchestrator | Friday 09 January 2026 01:05:57 +0000 (0:00:00.296) 0:00:42.389 ********
2026-01-09 01:08:14.880703 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:08:14.880707 | orchestrator |
2026-01-09 01:08:14.880712 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-09 01:08:14.880717 | orchestrator | Friday 09 January 2026 01:05:58 +0000 (0:00:00.265) 0:00:42.685 ********
2026-01-09 01:08:14.880722 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:08:14.880726 | orchestrator | skipping:
[testbed-node-1] 2026-01-09 01:08:14.880731 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.880736 | orchestrator | 2026-01-09 01:08:14.880740 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-09 01:08:14.880743 | orchestrator | Friday 09 January 2026 01:05:58 +0000 (0:00:00.265) 0:00:42.951 ******** 2026-01-09 01:08:14.880747 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:08:14.880751 | orchestrator | 2026-01-09 01:08:14.880755 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-09 01:08:14.880758 | orchestrator | Friday 09 January 2026 01:05:58 +0000 (0:00:00.477) 0:00:43.429 ******** 2026-01-09 01:08:14.880766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.880778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.880786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.880794 | 
orchestrator | 2026-01-09 01:08:14.880798 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-09 01:08:14.880802 | orchestrator | Friday 09 January 2026 01:06:04 +0000 (0:00:05.729) 0:00:49.159 ******** 2026-01-09 01:08:14.880810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 
01:08:14.880814 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.880818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 01:08:14.880835 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.880850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 01:08:14.880856 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.880861 | orchestrator | 2026-01-09 01:08:14.880870 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-09 01:08:14.880877 | orchestrator | Friday 09 January 2026 01:06:08 +0000 (0:00:03.930) 0:00:53.089 ******** 2026-01-09 01:08:14.880885 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 01:08:14.880896 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.880906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-09 01:08:14.880913 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.880925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-09 01:08:14.880932 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:08:14.880938 | orchestrator |
2026-01-09 01:08:14.880944 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-01-09 01:08:14.880950 | orchestrator | Friday 09 January 2026 01:06:12 +0000 (0:00:03.557) 0:00:56.753 ********
2026-01-09 01:08:14.880957 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:08:14.880963 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:08:14.880971 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:08:14.880979 | orchestrator |
2026-01-09 01:08:14.880983 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-09
01:08:14.880987 | orchestrator | Friday 09 January 2026 01:06:15 +0000 (0:00:03.557) 0:01:00.310 ******** 2026-01-09 01:08:14.880994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881019 | orchestrator | 2026-01-09 01:08:14.881023 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-09 01:08:14.881026 | orchestrator | Friday 09 January 2026 01:06:20 +0000 (0:00:04.786) 0:01:05.097 ******** 2026-01-09 01:08:14.881030 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881034 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:08:14.881038 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:08:14.881041 | orchestrator | 2026-01-09 01:08:14.881045 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-09 01:08:14.881049 | 
orchestrator | Friday 09 January 2026 01:06:31 +0000 (0:00:10.940) 0:01:16.037 ******** 2026-01-09 01:08:14.881053 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881056 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881060 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881064 | orchestrator | 2026-01-09 01:08:14.881067 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-09 01:08:14.881071 | orchestrator | Friday 09 January 2026 01:06:35 +0000 (0:00:03.662) 0:01:19.699 ******** 2026-01-09 01:08:14.881075 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881079 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881083 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881086 | orchestrator | 2026-01-09 01:08:14.881090 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-09 01:08:14.881094 | orchestrator | Friday 09 January 2026 01:06:38 +0000 (0:00:03.288) 0:01:22.988 ******** 2026-01-09 01:08:14.881098 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881103 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881108 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881111 | orchestrator | 2026-01-09 01:08:14.881115 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-09 01:08:14.881119 | orchestrator | Friday 09 January 2026 01:06:42 +0000 (0:00:04.207) 0:01:27.195 ******** 2026-01-09 01:08:14.881123 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881126 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881130 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881134 | orchestrator | 2026-01-09 01:08:14.881137 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-09 01:08:14.881141 | 
orchestrator | Friday 09 January 2026 01:06:47 +0000 (0:00:04.973) 0:01:32.169 ******** 2026-01-09 01:08:14.881145 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881149 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881152 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881156 | orchestrator | 2026-01-09 01:08:14.881160 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-09 01:08:14.881167 | orchestrator | Friday 09 January 2026 01:06:47 +0000 (0:00:00.331) 0:01:32.500 ******** 2026-01-09 01:08:14.881171 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-09 01:08:14.881175 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881179 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-09 01:08:14.881183 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881187 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-09 01:08:14.881190 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881194 | orchestrator | 2026-01-09 01:08:14.881198 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-09 01:08:14.881202 | orchestrator | Friday 09 January 2026 01:06:51 +0000 (0:00:03.895) 0:01:36.396 ******** 2026-01-09 01:08:14.881205 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881209 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:08:14.881213 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:08:14.881216 | orchestrator | 2026-01-09 01:08:14.881220 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-09 01:08:14.881224 | orchestrator | Friday 09 January 2026 01:06:59 +0000 (0:00:07.560) 0:01:43.956 ******** 
2026-01-09 01:08:14.881231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-09 01:08:14.881253 | orchestrator | 2026-01-09 01:08:14.881257 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-09 01:08:14.881260 | orchestrator | Friday 09 January 2026 01:07:03 +0000 (0:00:04.468) 0:01:48.424 ******** 2026-01-09 01:08:14.881264 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:08:14.881268 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:08:14.881272 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:08:14.881275 | orchestrator | 2026-01-09 01:08:14.881281 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-09 01:08:14.881285 | orchestrator | Friday 09 January 2026 01:07:04 +0000 (0:00:00.521) 0:01:48.946 ******** 2026-01-09 01:08:14.881289 | 
orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881293 | orchestrator | 2026-01-09 01:08:14.881297 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-09 01:08:14.881300 | orchestrator | Friday 09 January 2026 01:07:06 +0000 (0:00:02.063) 0:01:51.009 ******** 2026-01-09 01:08:14.881304 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881308 | orchestrator | 2026-01-09 01:08:14.881311 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-09 01:08:14.881315 | orchestrator | Friday 09 January 2026 01:07:08 +0000 (0:00:02.060) 0:01:53.070 ******** 2026-01-09 01:08:14.881319 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881323 | orchestrator | 2026-01-09 01:08:14.881326 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-09 01:08:14.881330 | orchestrator | Friday 09 January 2026 01:07:10 +0000 (0:00:01.985) 0:01:55.055 ******** 2026-01-09 01:08:14.881337 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881340 | orchestrator | 2026-01-09 01:08:14.881344 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-09 01:08:14.881348 | orchestrator | Friday 09 January 2026 01:07:38 +0000 (0:00:28.168) 0:02:23.223 ******** 2026-01-09 01:08:14.881352 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881355 | orchestrator | 2026-01-09 01:08:14.881359 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-09 01:08:14.881363 | orchestrator | Friday 09 January 2026 01:07:41 +0000 (0:00:02.946) 0:02:26.169 ******** 2026-01-09 01:08:14.881367 | orchestrator | 2026-01-09 01:08:14.881373 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-09 01:08:14.881377 | orchestrator | Friday 09 January 2026 01:07:41 
+0000 (0:00:00.196) 0:02:26.366 ******** 2026-01-09 01:08:14.881397 | orchestrator | 2026-01-09 01:08:14.881401 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-09 01:08:14.881405 | orchestrator | Friday 09 January 2026 01:07:41 +0000 (0:00:00.057) 0:02:26.424 ******** 2026-01-09 01:08:14.881409 | orchestrator | 2026-01-09 01:08:14.881413 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-09 01:08:14.881416 | orchestrator | Friday 09 January 2026 01:07:41 +0000 (0:00:00.060) 0:02:26.485 ******** 2026-01-09 01:08:14.881420 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:08:14.881424 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:08:14.881428 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:08:14.881431 | orchestrator | 2026-01-09 01:08:14.881435 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:08:14.881440 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-09 01:08:14.881446 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-09 01:08:14.881449 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-09 01:08:14.881453 | orchestrator | 2026-01-09 01:08:14.881457 | orchestrator | 2026-01-09 01:08:14.881461 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:08:14.881464 | orchestrator | Friday 09 January 2026 01:08:11 +0000 (0:00:29.746) 0:02:56.231 ******** 2026-01-09 01:08:14.881468 | orchestrator | =============================================================================== 2026-01-09 01:08:14.881472 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.75s 2026-01-09 
01:08:14.881476 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.17s 2026-01-09 01:08:14.881480 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.94s 2026-01-09 01:08:14.881486 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 8.24s 2026-01-09 01:08:14.881494 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.56s 2026-01-09 01:08:14.881502 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.04s 2026-01-09 01:08:14.881508 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.73s 2026-01-09 01:08:14.881514 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.97s 2026-01-09 01:08:14.881520 | orchestrator | glance : Copying over config.json files for services -------------------- 4.79s 2026-01-09 01:08:14.881526 | orchestrator | glance : Check glance containers ---------------------------------------- 4.47s 2026-01-09 01:08:14.881531 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.21s 2026-01-09 01:08:14.881537 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.09s 2026-01-09 01:08:14.881552 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.06s 2026-01-09 01:08:14.881571 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.99s 2026-01-09 01:08:14.881578 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.93s 2026-01-09 01:08:14.881585 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.90s 2026-01-09 01:08:14.881590 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.66s 2026-01-09 
01:08:14.881594 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.66s 2026-01-09 01:08:14.881598 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.60s 2026-01-09 01:08:14.881605 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.56s 2026-01-09 01:08:14.881609 | orchestrator | 2026-01-09 01:08:14 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:14.881613 | orchestrator | 2026-01-09 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:17.938266 | orchestrator | 2026-01-09 01:08:17 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:17.941154 | orchestrator | 2026-01-09 01:08:17 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:17.943348 | orchestrator | 2026-01-09 01:08:17 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:08:17.945730 | orchestrator | 2026-01-09 01:08:17 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:17.945800 | orchestrator | 2026-01-09 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:20.988344 | orchestrator | 2026-01-09 01:08:20 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:08:20.990680 | orchestrator | 2026-01-09 01:08:20 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:08:20.992497 | orchestrator | 2026-01-09 01:08:20 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:08:20.994429 | orchestrator | 2026-01-09 01:08:20 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:08:20.995005 | orchestrator | 2026-01-09 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:08:24.045183 | orchestrator | 2026-01-09 01:08:24 | INFO  | Task 
c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:09:00.640173 | orchestrator | 2026-01-09 01:09:00 | INFO  | Task
c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state STARTED 2026-01-09 01:09:00.642248 | orchestrator | 2026-01-09 01:09:00 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:00.646443 | orchestrator | 2026-01-09 01:09:00 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:00.649346 | orchestrator | 2026-01-09 01:09:00 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:00.649394 | orchestrator | 2026-01-09 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:03.697914 | orchestrator | 2026-01-09 01:09:03 | INFO  | Task c4565ec6-9c81-4fd5-acd1-a1e0f490e40a is in state SUCCESS 2026-01-09 01:09:03.699797 | orchestrator | 2026-01-09 01:09:03.699854 | orchestrator | 2026-01-09 01:09:03.699864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:09:03.699871 | orchestrator | 2026-01-09 01:09:03.699878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:09:03.699903 | orchestrator | Friday 09 January 2026 01:06:06 +0000 (0:00:00.248) 0:00:00.248 ******** 2026-01-09 01:09:03.699909 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:09:03.699916 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:09:03.699922 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:09:03.699929 | orchestrator | 2026-01-09 01:09:03.699935 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:09:03.699941 | orchestrator | Friday 09 January 2026 01:06:07 +0000 (0:00:00.335) 0:00:00.583 ******** 2026-01-09 01:09:03.699954 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-09 01:09:03.699961 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-09 01:09:03.699968 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-09 
01:09:03.699974 | orchestrator |
2026-01-09 01:09:03.699981 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-09 01:09:03.699987 | orchestrator |
2026-01-09 01:09:03.699998 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-09 01:09:03.700077 | orchestrator | Friday 09 January 2026 01:06:07 +0000 (0:00:00.426) 0:00:01.009 ********
2026-01-09 01:09:03.700180 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:09:03.700193 | orchestrator |
2026-01-09 01:09:03.700199 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-01-09 01:09:03.700206 | orchestrator | Friday 09 January 2026 01:06:08 +0000 (0:00:00.724) 0:00:01.734 ********
2026-01-09 01:09:03.700213 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-09 01:09:03.700220 | orchestrator |
2026-01-09 01:09:03.700227 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-01-09 01:09:03.700234 | orchestrator | Friday 09 January 2026 01:06:11 +0000 (0:00:03.400) 0:00:05.134 ********
2026-01-09 01:09:03.700240 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-09 01:09:03.700247 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-09 01:09:03.700254 | orchestrator |
2026-01-09 01:09:03.700261 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-09 01:09:03.700268 | orchestrator | Friday 09 January 2026 01:06:18 +0000 (0:00:06.717) 0:00:11.852 ********
2026-01-09 01:09:03.700274 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-09 01:09:03.700281 | orchestrator |
2026-01-09 01:09:03.700288 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-09 01:09:03.700294 | orchestrator | Friday 09 January 2026 01:06:22 +0000 (0:00:03.773) 0:00:15.629 ********
2026-01-09 01:09:03.700301 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-09 01:09:03.700308 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-09 01:09:03.700315 | orchestrator |
2026-01-09 01:09:03.700322 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-09 01:09:03.700328 | orchestrator | Friday 09 January 2026 01:06:26 +0000 (0:00:04.416) 0:00:20.046 ********
2026-01-09 01:09:03.700334 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-09 01:09:03.700341 | orchestrator |
2026-01-09 01:09:03.700347 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-01-09 01:09:03.700354 | orchestrator | Friday 09 January 2026 01:06:30 +0000 (0:00:04.308) 0:00:24.355 ********
2026-01-09 01:09:03.700361 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-09 01:09:03.701166 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-09 01:09:03.701181 | orchestrator |
2026-01-09 01:09:03.701188 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-01-09 01:09:03.701195 | orchestrator | Friday 09 January 2026 01:06:38 +0000 (0:00:07.446) 0:00:31.802 ********
2026-01-09 01:09:03.701219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701383 | orchestrator |
2026-01-09 01:09:03.701390 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-09 01:09:03.701396 | orchestrator | Friday 09 January 2026 01:06:41 +0000 (0:00:03.045) 0:00:34.847 ********
2026-01-09 01:09:03.701403 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.701424 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:09:03.701443 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:09:03.701447 | orchestrator |
2026-01-09 01:09:03.701451 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-09 01:09:03.701455 | orchestrator | Friday 09 January 2026 01:06:41 +0000 (0:00:00.334) 0:00:35.181 ********
2026-01-09 01:09:03.701459 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:09:03.701463 | orchestrator |
2026-01-09 01:09:03.701467 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-01-09 01:09:03.701471 | orchestrator | Friday 09 January 2026 01:06:42 +0000 (0:00:00.737) 0:00:35.918 ********
2026-01-09 01:09:03.701492 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-01-09 01:09:03.701496 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-01-09 01:09:03.701500 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-01-09 01:09:03.701504 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-09 01:09:03.701508 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-09 01:09:03.701512 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-09 01:09:03.701515 | orchestrator |
2026-01-09 01:09:03.701519 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-09 01:09:03.701523 | orchestrator | Friday 09 January 2026 01:06:44 +0000 (0:00:02.067) 0:00:37.986 ********
2026-01-09 01:09:03.701528 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701532 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701541 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701548 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701564 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701569 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701573 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701580 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701585 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701600 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701605 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701609 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-09 01:09:03.701616 | orchestrator |
2026-01-09 01:09:03.701619 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-01-09 01:09:03.701623 | orchestrator | Friday 09 January 2026 01:06:49 +0000 (0:00:05.125) 0:00:43.111 ********
2026-01-09 01:09:03.701627 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:09:03.701632 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:09:03.701635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-09 01:09:03.701640 | orchestrator |
2026-01-09 01:09:03.701644 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-01-09 01:09:03.701647 | orchestrator | Friday 09 January 2026 01:06:52 +0000 (0:00:02.360) 0:00:45.471 ********
2026-01-09 01:09:03.701651 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-01-09 01:09:03.701655 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-01-09 01:09:03.701659 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-01-09 01:09:03.701662 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-01-09 01:09:03.701666 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-01-09 01:09:03.701670 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-01-09 01:09:03.701674 | orchestrator |
2026-01-09 01:09:03.701677 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-01-09 01:09:03.701681 | orchestrator | Friday 09 January 2026 01:06:57 +0000 (0:00:05.243) 0:00:50.715 ********
2026-01-09 01:09:03.701685 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-01-09 01:09:03.701689 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-01-09 01:09:03.701693 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-01-09 01:09:03.701697 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-01-09 01:09:03.701700 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-01-09 01:09:03.701704 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-01-09 01:09:03.701708 | orchestrator |
2026-01-09 01:09:03.701714 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-01-09 01:09:03.701718 | orchestrator | Friday 09 January 2026 01:06:58 +0000 (0:00:01.155) 0:00:51.871 ********
2026-01-09 01:09:03.701721 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.701725 | orchestrator |
2026-01-09 01:09:03.701729 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-01-09 01:09:03.701733 | orchestrator | Friday 09 January 2026 01:06:58 +0000 (0:00:00.170) 0:00:52.042 ********
2026-01-09 01:09:03.701736 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.701740 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:09:03.701744 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:09:03.701748 | orchestrator |
2026-01-09 01:09:03.701751 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-09 01:09:03.701755 | orchestrator | Friday 09 January 2026 01:06:58 +0000 (0:00:00.374) 0:00:52.416 ********
2026-01-09 01:09:03.701759 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:09:03.701763 | orchestrator |
2026-01-09 01:09:03.701767 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-01-09 01:09:03.701792 | orchestrator | Friday 09 January 2026 01:06:59 +0000 (0:00:00.783) 0:00:53.199 ********
2026-01-09 01:09:03.701797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701871 | orchestrator |
2026-01-09 01:09:03.701875 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-01-09 01:09:03.701879 | orchestrator | Friday 09 January 2026 01:07:04 +0000 (0:00:04.319) 0:00:57.519 ********
2026-01-09 01:09:03.701883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.701901 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.701908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-09 01:09:03.701915 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701927 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:09:03.701934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.701943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.701974 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:09:03.701981 | orchestrator | 2026-01-09 01:09:03.701986 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-09 01:09:03.701992 | orchestrator | Friday 09 January 2026 01:07:04 +0000 (0:00:00.739) 0:00:58.259 ******** 2026-01-09 01:09:03.701998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702084 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:09:03.702091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702115 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:09:03.702124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702156 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:09:03.702162 | orchestrator | 2026-01-09 01:09:03.702168 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-09 01:09:03.702174 | orchestrator | Friday 09 January 2026 01:07:06 +0000 (0:00:01.164) 0:00:59.423 ******** 2026-01-09 01:09:03.702181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702310 | orchestrator | 2026-01-09 01:09:03.702322 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-09 01:09:03.702329 | orchestrator | Friday 09 January 2026 01:07:09 +0000 (0:00:03.921) 0:01:03.345 ******** 2026-01-09 01:09:03.702336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-09 01:09:03.702342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-09 01:09:03.702349 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-09 01:09:03.702355 | orchestrator | 2026-01-09 01:09:03.702361 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-09 01:09:03.702367 | orchestrator | Friday 09 January 2026 01:07:11 +0000 (0:00:01.867) 0:01:05.213 ******** 2026-01-09 01:09:03.702380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702401 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702559 | orchestrator | 2026-01-09 01:09:03.702565 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-09 01:09:03.702571 | orchestrator | Friday 09 January 2026 01:07:23 +0000 (0:00:11.598) 0:01:16.811 ******** 2026-01-09 01:09:03.702578 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:09:03.702583 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:09:03.702589 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:09:03.702595 | orchestrator | 2026-01-09 01:09:03.702602 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-09 01:09:03.702613 | orchestrator | Friday 09 January 2026 01:07:25 +0000 (0:00:01.674) 0:01:18.485 ******** 2026-01-09 01:09:03.702620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702627 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702672 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:09:03.702679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702696 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:09:03.702702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-09 01:09:03.702706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-09 01:09:03.702747 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:09:03.702752 | orchestrator | 2026-01-09 01:09:03.702759 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-09 01:09:03.702765 | orchestrator | Friday 09 January 2026 01:07:25 +0000 (0:00:00.717) 0:01:19.203 ******** 2026-01-09 01:09:03.702771 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:09:03.702777 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:09:03.702782 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:09:03.702788 | orchestrator | 2026-01-09 01:09:03.702793 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-09 01:09:03.702800 | orchestrator | Friday 09 January 2026 01:07:26 +0000 (0:00:00.357) 0:01:19.560 ******** 2026-01-09 01:09:03.702811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-09 01:09:03.702841 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-09 01:09:03.702908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-09 01:09:03.702913 | orchestrator |
2026-01-09 01:09:03.702917 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-09 01:09:03.702921 | orchestrator | Friday 09 January 2026 01:07:29 +0000 (0:00:02.937) 0:01:22.498 ********
2026-01-09 01:09:03.702925 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.702929 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:09:03.702932 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:09:03.702936 | orchestrator |
2026-01-09 01:09:03.702940 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-01-09 01:09:03.702944 | orchestrator | Friday 09 January 2026 01:07:29 +0000 (0:00:00.444) 0:01:22.942 ********
2026-01-09 01:09:03.702948 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.702951 | orchestrator |
2026-01-09 01:09:03.702955 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-09 01:09:03.702959 | orchestrator | Friday 09 January 2026 01:07:31 +0000 (0:00:02.341) 0:01:25.283 ********
2026-01-09 01:09:03.702963 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.702966 | orchestrator |
2026-01-09 01:09:03.702970 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-09 01:09:03.702974 | orchestrator | Friday 09 January 2026 01:07:34 +0000 (0:00:02.637) 0:01:27.921 ********
2026-01-09 01:09:03.702978 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.702982 | orchestrator |
2026-01-09 01:09:03.702985 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-09 01:09:03.702989 | orchestrator | Friday 09 January 2026 01:07:55 +0000 (0:00:21.459) 0:01:49.380 ********
2026-01-09 01:09:03.702993 | orchestrator |
2026-01-09 01:09:03.702999 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-09 01:09:03.703006 | orchestrator | Friday 09 January 2026 01:07:56 +0000 (0:00:00.141) 0:01:49.522 ********
2026-01-09 01:09:03.703016 | orchestrator |
2026-01-09 01:09:03.703022 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-09 01:09:03.703029 | orchestrator | Friday 09 January 2026 01:07:56 +0000 (0:00:00.070) 0:01:49.811 ********
2026-01-09 01:09:03.703036 | orchestrator |
2026-01-09 01:09:03.703043 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-09 01:09:03.703049 | orchestrator | Friday 09 January 2026 01:07:56 +0000 (0:00:00.070) 0:01:49.811 ********
2026-01-09 01:09:03.703055 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.703062 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:09:03.703068 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:09:03.703072 | orchestrator |
2026-01-09 01:09:03.703076 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-09 01:09:03.703084 | orchestrator | Friday 09 January 2026 01:08:19 +0000 (0:00:22.819) 0:02:12.631 ********
2026-01-09 01:09:03.703088 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.703092 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:09:03.703096 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:09:03.703099 | orchestrator |
2026-01-09 01:09:03.703103 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-09 01:09:03.703107 | orchestrator | Friday 09 January 2026 01:08:25 +0000 (0:00:05.876) 0:02:18.507 ********
2026-01-09 01:09:03.703111 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.703119 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:09:03.703122 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:09:03.703126 | orchestrator |
2026-01-09 01:09:03.703130 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-09 01:09:03.703134 | orchestrator | Friday 09 January 2026 01:08:50 +0000 (0:00:25.674) 0:02:44.182 ********
2026-01-09 01:09:03.703138 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:09:03.703142 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:09:03.703145 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:09:03.703149 | orchestrator |
2026-01-09 01:09:03.703153 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-09 01:09:03.703160 | orchestrator | Friday 09 January 2026 01:09:01 +0000 (0:00:10.480) 0:02:54.663 ********
2026-01-09 01:09:03.703165 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:09:03.703170 | orchestrator |
2026-01-09 01:09:03.703174 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:09:03.703180 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-09 01:09:03.703185 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 01:09:03.703190 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-09 01:09:03.703195 | orchestrator |
2026-01-09 01:09:03.703199 | orchestrator |
2026-01-09 01:09:03.703204 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:09:03.703209 | orchestrator | Friday 09 January 2026 01:09:01 +0000 (0:00:00.269) 0:02:54.932 ********
2026-01-09 01:09:03.703214 | orchestrator |
===============================================================================
2026-01-09 01:09:03.703219 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.67s
2026-01-09 01:09:03.703223 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.82s
2026-01-09 01:09:03.703228 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.46s
2026-01-09 01:09:03.703233 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.60s
2026-01-09 01:09:03.703237 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.48s
2026-01-09 01:09:03.703242 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.45s
2026-01-09 01:09:03.703246 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.72s
2026-01-09 01:09:03.703251 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.88s
2026-01-09 01:09:03.703258 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 5.24s
2026-01-09 01:09:03.703264 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.13s
2026-01-09 01:09:03.703271 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.42s
2026-01-09 01:09:03.703277 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.32s
2026-01-09 01:09:03.703283 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.31s
2026-01-09 01:09:03.703289 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.92s
2026-01-09 01:09:03.703295 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.77s
2026-01-09 01:09:03.703301 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.40s
2026-01-09 01:09:03.703307 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.05s
2026-01-09 01:09:03.703313 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.94s
2026-01-09 01:09:03.703320 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.64s
2026-01-09 01:09:03.703331 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.36s
2026-01-09 01:09:03.703340 | orchestrator | 2026-01-09 01:09:03 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:09:03.703346 | orchestrator | 2026-01-09 01:09:03 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED
2026-01-09 01:09:03.704301 | orchestrator | 2026-01-09 01:09:03 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:09:03.704328 | orchestrator | 2026-01-09 01:09:03 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:09:06.759886 | orchestrator | 2026-01-09 01:09:06 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:09:06.764743 | orchestrator | 2026-01-09 01:09:06 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED
2026-01-09 01:09:06.766929 | orchestrator | 2026-01-09 01:09:06 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:09:06.767129 | orchestrator | 2026-01-09 01:09:06 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:09:09.812918 | orchestrator | 2026-01-09 01:09:09 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED
2026-01-09 01:09:09.814149 | orchestrator | 2026-01-09 01:09:09 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED
2026-01-09 01:09:09.815238 | orchestrator | 2026-01-09 01:09:09 | INFO  | Task
491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:09.815271 | orchestrator | 2026-01-09 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:12.856178 | orchestrator | 2026-01-09 01:09:12 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:12.860594 | orchestrator | 2026-01-09 01:09:12 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:12.862396 | orchestrator | 2026-01-09 01:09:12 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:12.862715 | orchestrator | 2026-01-09 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:15.914792 | orchestrator | 2026-01-09 01:09:15 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:15.917369 | orchestrator | 2026-01-09 01:09:15 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:15.917937 | orchestrator | 2026-01-09 01:09:15 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:15.917974 | orchestrator | 2026-01-09 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:18.966401 | orchestrator | 2026-01-09 01:09:18 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:18.968360 | orchestrator | 2026-01-09 01:09:18 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:18.970848 | orchestrator | 2026-01-09 01:09:18 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:18.970901 | orchestrator | 2026-01-09 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:22.018781 | orchestrator | 2026-01-09 01:09:22 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:22.020900 | orchestrator | 2026-01-09 01:09:22 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state 
STARTED 2026-01-09 01:09:22.022451 | orchestrator | 2026-01-09 01:09:22 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:22.022524 | orchestrator | 2026-01-09 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:25.072357 | orchestrator | 2026-01-09 01:09:25 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:25.074521 | orchestrator | 2026-01-09 01:09:25 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:25.076179 | orchestrator | 2026-01-09 01:09:25 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:25.076254 | orchestrator | 2026-01-09 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:28.122950 | orchestrator | 2026-01-09 01:09:28 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:28.125274 | orchestrator | 2026-01-09 01:09:28 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:28.127059 | orchestrator | 2026-01-09 01:09:28 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:28.127355 | orchestrator | 2026-01-09 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:31.180447 | orchestrator | 2026-01-09 01:09:31 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:31.182911 | orchestrator | 2026-01-09 01:09:31 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:31.185127 | orchestrator | 2026-01-09 01:09:31 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:31.185168 | orchestrator | 2026-01-09 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:34.235499 | orchestrator | 2026-01-09 01:09:34 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:34.238250 | orchestrator | 
2026-01-09 01:09:34 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:34.240918 | orchestrator | 2026-01-09 01:09:34 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:34.240981 | orchestrator | 2026-01-09 01:09:34 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:37.289851 | orchestrator | 2026-01-09 01:09:37 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:37.293122 | orchestrator | 2026-01-09 01:09:37 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:37.295906 | orchestrator | 2026-01-09 01:09:37 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:37.295984 | orchestrator | 2026-01-09 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:40.341161 | orchestrator | 2026-01-09 01:09:40 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:40.342471 | orchestrator | 2026-01-09 01:09:40 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:40.343725 | orchestrator | 2026-01-09 01:09:40 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:40.343935 | orchestrator | 2026-01-09 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:43.391405 | orchestrator | 2026-01-09 01:09:43 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:43.392912 | orchestrator | 2026-01-09 01:09:43 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:43.394766 | orchestrator | 2026-01-09 01:09:43 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:43.394929 | orchestrator | 2026-01-09 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:46.434733 | orchestrator | 2026-01-09 01:09:46 | INFO  | Task 
895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:46.437208 | orchestrator | 2026-01-09 01:09:46 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:46.439738 | orchestrator | 2026-01-09 01:09:46 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:46.439803 | orchestrator | 2026-01-09 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:49.486765 | orchestrator | 2026-01-09 01:09:49 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:49.487677 | orchestrator | 2026-01-09 01:09:49 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:49.488340 | orchestrator | 2026-01-09 01:09:49 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:49.488757 | orchestrator | 2026-01-09 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:52.535627 | orchestrator | 2026-01-09 01:09:52 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:52.537958 | orchestrator | 2026-01-09 01:09:52 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:52.540013 | orchestrator | 2026-01-09 01:09:52 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:52.540217 | orchestrator | 2026-01-09 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:09:55.583841 | orchestrator | 2026-01-09 01:09:55 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:55.585344 | orchestrator | 2026-01-09 01:09:55 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:55.587068 | orchestrator | 2026-01-09 01:09:55 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:55.587107 | orchestrator | 2026-01-09 01:09:55 | INFO  | Wait 1 second(s) until the next 
check 2026-01-09 01:09:58.631327 | orchestrator | 2026-01-09 01:09:58 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:09:58.635006 | orchestrator | 2026-01-09 01:09:58 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:09:58.637847 | orchestrator | 2026-01-09 01:09:58 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:09:58.637902 | orchestrator | 2026-01-09 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:01.695378 | orchestrator | 2026-01-09 01:10:01 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:01.696971 | orchestrator | 2026-01-09 01:10:01 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:01.698910 | orchestrator | 2026-01-09 01:10:01 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:01.698953 | orchestrator | 2026-01-09 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:04.739850 | orchestrator | 2026-01-09 01:10:04 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:04.741575 | orchestrator | 2026-01-09 01:10:04 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:04.742646 | orchestrator | 2026-01-09 01:10:04 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:04.742688 | orchestrator | 2026-01-09 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:07.787118 | orchestrator | 2026-01-09 01:10:07 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:07.787251 | orchestrator | 2026-01-09 01:10:07 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:07.788045 | orchestrator | 2026-01-09 01:10:07 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 
01:10:07.789266 | orchestrator | 2026-01-09 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:10.835497 | orchestrator | 2026-01-09 01:10:10 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:10.838984 | orchestrator | 2026-01-09 01:10:10 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:10.841157 | orchestrator | 2026-01-09 01:10:10 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:10.841218 | orchestrator | 2026-01-09 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:13.891311 | orchestrator | 2026-01-09 01:10:13 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:13.893451 | orchestrator | 2026-01-09 01:10:13 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:13.895753 | orchestrator | 2026-01-09 01:10:13 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:13.895877 | orchestrator | 2026-01-09 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:16.947879 | orchestrator | 2026-01-09 01:10:16 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:16.950857 | orchestrator | 2026-01-09 01:10:16 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:16.953691 | orchestrator | 2026-01-09 01:10:16 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:16.953756 | orchestrator | 2026-01-09 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:20.004794 | orchestrator | 2026-01-09 01:10:20 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:20.007042 | orchestrator | 2026-01-09 01:10:20 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:20.008856 | orchestrator | 2026-01-09 01:10:20 | 
INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:20.009139 | orchestrator | 2026-01-09 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:23.055817 | orchestrator | 2026-01-09 01:10:23 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:23.057841 | orchestrator | 2026-01-09 01:10:23 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:23.058798 | orchestrator | 2026-01-09 01:10:23 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:23.058857 | orchestrator | 2026-01-09 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:26.107832 | orchestrator | 2026-01-09 01:10:26 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:26.110970 | orchestrator | 2026-01-09 01:10:26 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:26.113306 | orchestrator | 2026-01-09 01:10:26 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:26.113399 | orchestrator | 2026-01-09 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:29.157909 | orchestrator | 2026-01-09 01:10:29 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:29.159738 | orchestrator | 2026-01-09 01:10:29 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:29.161603 | orchestrator | 2026-01-09 01:10:29 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:29.161643 | orchestrator | 2026-01-09 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:32.212595 | orchestrator | 2026-01-09 01:10:32 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state STARTED 2026-01-09 01:10:32.214724 | orchestrator | 2026-01-09 01:10:32 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in 
state STARTED 2026-01-09 01:10:32.217057 | orchestrator | 2026-01-09 01:10:32 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:32.217109 | orchestrator | 2026-01-09 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:35.257672 | orchestrator | 2026-01-09 01:10:35 | INFO  | Task 895d94e3-484d-4e9e-9a99-0555d46c2c68 is in state SUCCESS 2026-01-09 01:10:35.258843 | orchestrator | 2026-01-09 01:10:35 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state STARTED 2026-01-09 01:10:35.260070 | orchestrator | 2026-01-09 01:10:35 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:35.261052 | orchestrator | 2026-01-09 01:10:35 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:10:35.261076 | orchestrator | 2026-01-09 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:38.301894 | orchestrator | 2026-01-09 01:10:38 | INFO  | Task 79ba83da-1ce0-400f-9bc4-2920595d99be is in state SUCCESS 2026-01-09 01:10:38.303032 | orchestrator | 2026-01-09 01:10:38.303079 | orchestrator | 2026-01-09 01:10:38.303088 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:10:38.303097 | orchestrator | 2026-01-09 01:10:38.303104 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:10:38.303112 | orchestrator | Friday 09 January 2026 01:07:48 +0000 (0:00:00.222) 0:00:00.222 ******** 2026-01-09 01:10:38.303118 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:10:38.303127 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:10:38.303133 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:10:38.303139 | orchestrator | 2026-01-09 01:10:38.303146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:10:38.303152 | orchestrator | Friday 09 January 2026 01:07:49 +0000 
(0:00:00.331) 0:00:00.553 ********
2026-01-09 01:10:38.303159 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-09 01:10:38.303167 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-09 01:10:38.303173 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-09 01:10:38.303186 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-09 01:10:38.303199 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-09 01:10:38.303205 | orchestrator | Friday 09 January 2026 01:07:49 +0000 (0:00:00.657) 0:00:01.211 ********
2026-01-09 01:10:38.303219 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-01-09 01:10:38.303225 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:10:38.303231 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:10:38.303238 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:10:38.303250 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:10:38.303258 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:10:38.303267 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:10:38.303366 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:10:38.303387 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:10:38.303394 | orchestrator | Friday 09 January 2026 01:10:33 +0000 (0:02:43.948) 0:02:45.160 ********
2026-01-09 01:10:38.303400 | orchestrator | ===============================================================================
2026-01-09 01:10:38.303460 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 163.95s
2026-01-09 01:10:38.303467 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2026-01-09 01:10:38.303473 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-01-09 01:10:38.303492 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 01:10:38.303505 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 01:10:38.303511 | orchestrator | Friday 09 January 2026 01:08:16 +0000 (0:00:00.271) 0:00:00.271 ********
2026-01-09 01:10:38.303518 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:10:38.303525 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:10:38.303531 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:10:38.303544 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 01:10:38.303623 | orchestrator | Friday 09 January 2026 01:08:16 +0000 (0:00:00.287) 0:00:00.558 ********
2026-01-09 01:10:38.303631 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-09 01:10:38.303638 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-09 01:10:38.303659 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-09 01:10:38.303673 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-09 01:10:38.303688 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-09 01:10:38.303694 | orchestrator | Friday 09 January 2026 01:08:17 +0000 (0:00:00.443) 0:00:01.002 ******** 2026-01-09 01:10:38.303701 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:10:38.303709 | orchestrator | 2026-01-09 01:10:38.303716 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-09 01:10:38.303723 | orchestrator | Friday 09 January 2026 01:08:17 +0000 (0:00:00.521) 0:00:01.523 ******** 2026-01-09 01:10:38.303734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303785 | orchestrator | 2026-01-09 01:10:38.303792 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-09 01:10:38.303798 | orchestrator | Friday 09 January 2026 01:08:18 +0000 (0:00:00.814) 0:00:02.338 ******** 2026-01-09 01:10:38.303805 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-09 01:10:38.303813 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-09 01:10:38.303820 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:10:38.303827 | orchestrator | 2026-01-09 01:10:38.303834 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-09 01:10:38.303841 | orchestrator | Friday 09 January 2026 01:08:19 +0000 (0:00:00.979) 0:00:03.317 ******** 2026-01-09 01:10:38.303847 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:10:38.303854 | orchestrator | 2026-01-09 01:10:38.303861 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-09 
01:10:38.303869 | orchestrator | Friday 09 January 2026 01:08:20 +0000 (0:00:00.824) 0:00:04.142 ******** 2026-01-09 01:10:38.303876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.303903 | orchestrator | 2026-01-09 01:10:38.303923 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-09 01:10:38.303930 | orchestrator | Friday 09 January 2026 01:08:22 +0000 (0:00:01.675) 0:00:05.817 ******** 2026-01-09 01:10:38.303937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.303944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.303952 | orchestrator | skipping: [testbed-node-0] 2026-01-09 
01:10:38.303959 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:10:38.303966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.303973 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:10:38.303980 | orchestrator | 2026-01-09 01:10:38.303986 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-09 01:10:38.303993 | orchestrator | Friday 09 January 2026 01:08:22 +0000 (0:00:00.412) 0:00:06.230 ******** 2026-01-09 01:10:38.304005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.304012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.304025 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:10:38.304031 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:10:38.304045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-09 01:10:38.304051 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:10:38.304058 | orchestrator | 2026-01-09 01:10:38.304064 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-09 01:10:38.304071 | orchestrator | Friday 09 January 2026 01:08:23 +0000 (0:00:00.818) 0:00:07.049 ******** 2026-01-09 01:10:38.304078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.304085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.304092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.304100 | orchestrator | 2026-01-09 01:10:38.304106 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-01-09 01:10:38.304113 | orchestrator | Friday 09 January 2026 01:08:24 +0000 (0:00:01.389) 0:00:08.438 ******** 2026-01-09 01:10:38.304123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.304135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.304146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-09 01:10:38.304152 | orchestrator |
2026-01-09 01:10:38.304158 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-09 01:10:38.304164 | orchestrator | Friday 09 January 2026 01:08:25 +0000 (0:00:01.306) 0:00:09.745 ********
2026-01-09 01:10:38.304171 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:10:38.304177 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:10:38.304183 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:10:38.304189 | orchestrator |
2026-01-09 01:10:38.304196 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-09 01:10:38.304202 | orchestrator | Friday 09 January 2026 01:08:26 +0000 (0:00:00.595) 0:00:10.340 ********
2026-01-09 01:10:38.304208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-09 01:10:38.304215 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-09 01:10:38.304220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-09 01:10:38.304226 | orchestrator |
2026-01-09 01:10:38.304232 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-09 01:10:38.304239 | orchestrator | Friday 09 January 2026 01:08:27 +0000 (0:00:01.321) 0:00:11.662 ********
2026-01-09 01:10:38.304246 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-09 01:10:38.304252 | orchestrator | changed: [testbed-node-1] =>
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-09 01:10:38.304258 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-09 01:10:38.304265 | orchestrator |
2026-01-09 01:10:38.304271 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-01-09 01:10:38.304277 | orchestrator | Friday 09 January 2026 01:08:29 +0000 (0:00:01.282) 0:00:12.944 ********
2026-01-09 01:10:38.304283 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-09 01:10:38.304290 | orchestrator |
2026-01-09 01:10:38.304296 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-01-09 01:10:38.304302 | orchestrator | Friday 09 January 2026 01:08:29 +0000 (0:00:00.809) 0:00:13.753 ********
2026-01-09 01:10:38.304308 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-01-09 01:10:38.304314 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-01-09 01:10:38.304324 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:10:38.304330 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:10:38.304336 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:10:38.304342 | orchestrator |
2026-01-09 01:10:38.304348 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-01-09 01:10:38.304354 | orchestrator | Friday 09 January 2026 01:08:30 +0000 (0:00:00.525) 0:00:14.513 ********
2026-01-09 01:10:38.304361 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:10:38.304370 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:10:38.304377 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:10:38.304383 | orchestrator |
2026-01-09 01:10:38.304390 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-09
01:10:38.304396 | orchestrator | Friday 09 January 2026 01:08:31 +0000 (0:00:00.525) 0:00:15.038 ******** 2026-01-09 01:10:38.304528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084513, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0609004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084513, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0609004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084513, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0609004, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084585, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0758286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084585, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0758286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084585, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0758286, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084528, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.063632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084528, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.063632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084528, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 
1767917799.063632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084589, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0777502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084589, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0777502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084589, 'dev': 103, 'nlink': 1, 'atime': 
1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0777502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084552, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0694766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084552, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0694766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
26655, 'inode': 1084552, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0694766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084571, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0738943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084571, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0738943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084571, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0738943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084512, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.058812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.304734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1084512, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.058812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 84, 'inode': 1084512, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.058812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084520, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0622506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1084520, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0622506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 34113, 'inode': 1084520, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0622506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084533, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0647473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084533, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0647473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1084533, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0647473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084563, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0704565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084563, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0704565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084563, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0704565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084583, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0752747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084583, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0752747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084583, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0752747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084523, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0631025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084523, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0631025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1084523, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0631025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-09 01:10:38.305531 – 01:10:38.305959 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (loop output condensed: each item below reported `changed` with an identical stat result on all three nodes — regular file under /operations/grafana/dashboards/, mode '0644', uid 0, gid 0 (root:root), nlink 1, dev 103, atime/mtime 1767916962.0)
2026-01-09 01:10:38 | orchestrator |   ceph/radosgw-detail.json (size 19695)
2026-01-09 01:10:38 | orchestrator |   ceph/osds-overview.json (size 38432)
2026-01-09 01:10:38 | orchestrator |   ceph/multi-cluster-overview.json (size 62676)
2026-01-09 01:10:38 | orchestrator |   ceph/hosts-overview.json (size 27218)
2026-01-09 01:10:38 | orchestrator |   ceph/pool-overview.json (size 49139)
2026-01-09 01:10:38 | orchestrator |   ceph/host-details.json (size 44791)
2026-01-09 01:10:38 | orchestrator |   ceph/radosgw-sync-overview.json (size 16156)
2026-01-09 01:10:38 | orchestrator |   openstack/openstack.json (size 57270)
2026-01-09 01:10:38 | orchestrator |   infrastructure/haproxy.json (size 410814)
2026-01-09 01:10:38 | orchestrator |   infrastructure/database.json (size 30898)
2026-01-09 01:10:38 | orchestrator |   infrastructure/node-rsrc-use.json (size 15725)
2026-01-09 01:10:38 | orchestrator |   infrastructure/alertmanager-overview.json (size 9645)
2026-01-09 01:10:38 | orchestrator |   infrastructure/opensearch.json (size 65458)
2026-01-09 01:10:38 | orchestrator |   infrastructure/node_exporter_full.json (size 682774)
2026-01-09 01:10:38 | orchestrator |   infrastructure/prometheus-remote-write.json (size 22317)
2026-01-09 01:10:38 | orchestrator |   infrastructure/redfish.json (size 38087)
2026-01-09 01:10:38 | orchestrator |   infrastructure/nodes.json (size 21109)
2026-01-09 01:10:38 | orchestrator |   infrastructure/memcached.json (size 24243)
2026-01-09 01:10:38 | orchestrator |   infrastructure/fluentd.json (size 82960)
2026-01-09 01:10:38.305963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084661, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0914567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084661, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0914567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084661, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0914567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084615, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0845988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084615, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0845988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084615, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0845988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.305998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084670, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0943925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084670, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0943925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084670, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0943925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 
01:10:38.306052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084755, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1154568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084755, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1154568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084755, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1154568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084742, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1115546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084742, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1115546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084604, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.078866, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084742, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1115546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084604, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.078866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084608, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 
'mtime': 1767916962.0, 'ctime': 1767917799.0804496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084608, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0804496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084604, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.078866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 70691, 'inode': 1084709, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1036565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084709, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1036565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084608, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.0804496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084738, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.110901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084738, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.110901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084709, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.1036565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084738, 'dev': 103, 'nlink': 1, 'atime': 1767916962.0, 'mtime': 1767916962.0, 'ctime': 1767917799.110901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-09 01:10:38.306213 | orchestrator | 2026-01-09 01:10:38.306218 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-09 01:10:38.306224 | orchestrator | Friday 09 January 2026 01:09:08 +0000 (0:00:36.781) 0:00:51.820 ******** 2026-01-09 01:10:38.306232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.306237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.306241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-09 01:10:38.306246 | orchestrator | 2026-01-09 01:10:38.306250 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-09 01:10:38.306254 | orchestrator | Friday 09 January 2026 01:09:08 +0000 (0:00:00.929) 0:00:52.749 ******** 2026-01-09 01:10:38.306259 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:10:38.306263 | orchestrator | 2026-01-09 01:10:38.306268 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-09 01:10:38.306272 | orchestrator | Friday 09 January 2026 01:09:11 +0000 (0:00:02.205) 0:00:54.955 ******** 2026-01-09 01:10:38.306276 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:10:38.306281 | orchestrator | 2026-01-09 01:10:38.306285 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-09 01:10:38.306290 | orchestrator | Friday 09 January 2026 01:09:13 +0000 (0:00:02.433) 0:00:57.389 ******** 2026-01-09 01:10:38.306294 | orchestrator | 2026-01-09 
01:10:38.306299 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-09 01:10:38.306303 | orchestrator | Friday 09 January 2026 01:09:13 +0000 (0:00:00.065) 0:00:57.455 ******** 2026-01-09 01:10:38.306312 | orchestrator | 2026-01-09 01:10:38.306316 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-09 01:10:38.306321 | orchestrator | Friday 09 January 2026 01:09:13 +0000 (0:00:00.065) 0:00:57.520 ******** 2026-01-09 01:10:38.306328 | orchestrator | 2026-01-09 01:10:38.306334 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-09 01:10:38.306340 | orchestrator | Friday 09 January 2026 01:09:14 +0000 (0:00:00.245) 0:00:57.766 ******** 2026-01-09 01:10:38.306346 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:10:38.306352 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:10:38.306358 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:10:38.306365 | orchestrator | 2026-01-09 01:10:38.306371 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-09 01:10:38.306377 | orchestrator | Friday 09 January 2026 01:09:15 +0000 (0:00:01.789) 0:00:59.555 ******** 2026-01-09 01:10:38.306383 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:10:38.306391 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:10:38.306397 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-09 01:10:38.306420 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-09 01:10:38.306427 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-01-09 01:10:38.306433 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-01-09 01:10:38.306439 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:10:38.306446 | orchestrator | 2026-01-09 01:10:38.306451 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-09 01:10:38.306456 | orchestrator | Friday 09 January 2026 01:10:07 +0000 (0:00:51.598) 0:01:51.154 ******** 2026-01-09 01:10:38.306462 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:10:38.306468 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:10:38.306474 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:10:38.306480 | orchestrator | 2026-01-09 01:10:38.306486 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-09 01:10:38.306492 | orchestrator | Friday 09 January 2026 01:10:31 +0000 (0:00:23.823) 0:02:14.978 ******** 2026-01-09 01:10:38.306498 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:10:38.306505 | orchestrator | 2026-01-09 01:10:38.306511 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-09 01:10:38.306518 | orchestrator | Friday 09 January 2026 01:10:33 +0000 (0:00:02.084) 0:02:17.062 ******** 2026-01-09 01:10:38.306524 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:10:38.306531 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:10:38.306537 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:10:38.306543 | orchestrator | 2026-01-09 01:10:38.306553 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-09 01:10:38.306559 | orchestrator | Friday 09 January 2026 01:10:33 +0000 (0:00:00.485) 0:02:17.547 ******** 2026-01-09 01:10:38.306568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-09 01:10:38.306577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-09 01:10:38.306584 | orchestrator | 2026-01-09 01:10:38.306590 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-09 01:10:38.306596 | orchestrator | Friday 09 January 2026 01:10:36 +0000 (0:00:02.538) 0:02:20.086 ******** 2026-01-09 01:10:38.306607 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:10:38.306614 | orchestrator | 2026-01-09 01:10:38.306620 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:10:38.306627 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-09 01:10:38.306635 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-09 01:10:38.306641 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-09 01:10:38.306647 | orchestrator | 2026-01-09 01:10:38.306653 | orchestrator | 2026-01-09 01:10:38.306659 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:10:38.306665 | orchestrator | Friday 09 January 2026 01:10:36 +0000 (0:00:00.263) 0:02:20.350 ******** 2026-01-09 01:10:38.306671 | orchestrator | =============================================================================== 2026-01-09 01:10:38.306678 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 51.60s 2026-01-09 01:10:38.306684 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.78s 2026-01-09 01:10:38.306690 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.82s 2026-01-09 01:10:38.306696 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.54s 2026-01-09 01:10:38.306702 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.43s 2026-01-09 01:10:38.306708 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.21s 2026-01-09 01:10:38.306715 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.08s 2026-01-09 01:10:38.306721 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.79s 2026-01-09 01:10:38.306727 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.68s 2026-01-09 01:10:38.306733 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.39s 2026-01-09 01:10:38.306739 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s 2026-01-09 01:10:38.306745 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.31s 2026-01-09 01:10:38.306758 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.28s 2026-01-09 01:10:38.306764 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.98s 2026-01-09 01:10:38.306770 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.93s 2026-01-09 01:10:38.306776 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.82s 2026-01-09 01:10:38.306781 | orchestrator | service-cert-copy : 
grafana | Copying over backend internal TLS key ----- 0.82s 2026-01-09 01:10:38.306787 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.81s 2026-01-09 01:10:38.306794 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s 2026-01-09 01:10:38.306800 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.76s 2026-01-09 01:10:38.306807 | orchestrator | 2026-01-09 01:10:38 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:38.306813 | orchestrator | 2026-01-09 01:10:38 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:10:38.306820 | orchestrator | 2026-01-09 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:41.350444 | orchestrator | 2026-01-09 01:10:41 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:41.352272 | orchestrator | 2026-01-09 01:10:41 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:10:41.352374 | orchestrator | 2026-01-09 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:44.400802 | orchestrator | 2026-01-09 01:10:44 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:44.402478 | orchestrator | 2026-01-09 01:10:44 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:10:44.402628 | orchestrator | 2026-01-09 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:47.443156 | orchestrator | 2026-01-09 01:10:47 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED 2026-01-09 01:10:47.443299 | orchestrator | 2026-01-09 01:10:47 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:10:47.443308 | orchestrator | 2026-01-09 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:10:50.492237 | 
orchestrator | 2026-01-09 01:10:50 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:10:50.494224 | orchestrator | 2026-01-09 01:10:50 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED
2026-01-09 01:10:50.494278 | orchestrator | 2026-01-09 01:10:50 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:15:00.413251 | orchestrator | 2026-01-09 01:15:00 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state STARTED
2026-01-09 01:15:00.414337 | orchestrator | 2026-01-09 01:15:00 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED
2026-01-09 01:15:00.414391 | orchestrator | 2026-01-09 01:15:00 | INFO  | Wait 1 second(s) until the next check
2026-01-09 01:15:03.477017 | orchestrator | 2026-01-09
01:15:03 | INFO  | Task 491c3159-3f1c-4de2-87e9-b32e8e7b6dce is in state SUCCESS
2026-01-09 01:15:03.479164 | orchestrator |
2026-01-09 01:15:03.479215 | orchestrator |
2026-01-09 01:15:03.479224 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-09 01:15:03.479233 | orchestrator |
2026-01-09 01:15:03.479239 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-09 01:15:03.479246 | orchestrator | Friday 09 January 2026 01:06:08 +0000 (0:00:00.271) 0:00:00.271 ********
2026-01-09 01:15:03.479254 | orchestrator | changed: [testbed-manager]
2026-01-09 01:15:03.479262 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479310 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:15:03.479335 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:15:03.479392 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:15:03.479400 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:15:03.479406 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:15:03.479412 | orchestrator |
2026-01-09 01:15:03.479419 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-09 01:15:03.479424 | orchestrator | Friday 09 January 2026 01:06:09 +0000 (0:00:01.072) 0:00:01.343 ********
2026-01-09 01:15:03.479427 | orchestrator | changed: [testbed-manager]
2026-01-09 01:15:03.479431 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479462 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:15:03.479466 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:15:03.479481 | orchestrator | changed: [testbed-node-3]
2026-01-09 01:15:03.479485 | orchestrator | changed: [testbed-node-4]
2026-01-09 01:15:03.479489 | orchestrator | changed: [testbed-node-5]
2026-01-09 01:15:03.479492 | orchestrator |
2026-01-09 01:15:03.479496 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-09 01:15:03.479500 | orchestrator | Friday 09 January 2026 01:06:11 +0000 (0:00:01.180) 0:00:02.523 ********
2026-01-09 01:15:03.479504 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-09 01:15:03.479509 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-09 01:15:03.479515 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-09 01:15:03.479521 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-09 01:15:03.479528 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-09 01:15:03.479534 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-09 01:15:03.479540 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-09 01:15:03.479546 | orchestrator |
2026-01-09 01:15:03.479553 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-09 01:15:03.479558 | orchestrator |
2026-01-09 01:15:03.479562 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-09 01:15:03.479566 | orchestrator | Friday 09 January 2026 01:06:12 +0000 (0:00:00.985) 0:00:03.508 ********
2026-01-09 01:15:03.479570 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-09 01:15:03.479574 | orchestrator |
2026-01-09 01:15:03.479578 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-09 01:15:03.479581 | orchestrator | Friday 09 January 2026 01:06:12 +0000 (0:00:00.672) 0:00:04.181 ********
2026-01-09 01:15:03.479586 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-09 01:15:03.479590 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-09 01:15:03.479594 | orchestrator |
2026-01-09 01:15:03.479598 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-09 01:15:03.479601 | orchestrator | Friday 09 January 2026 01:06:16 +0000 (0:00:04.290) 0:00:08.472 ********
2026-01-09 01:15:03.479606 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-09 01:15:03.479613 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-09 01:15:03.479619 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479625 | orchestrator |
2026-01-09 01:15:03.479632 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-09 01:15:03.479638 | orchestrator | Friday 09 January 2026 01:06:21 +0000 (0:00:04.249) 0:00:12.722 ********
2026-01-09 01:15:03.479644 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479651 | orchestrator |
2026-01-09 01:15:03.479656 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-09 01:15:03.479660 | orchestrator | Friday 09 January 2026 01:06:22 +0000 (0:00:00.792) 0:00:13.514 ********
2026-01-09 01:15:03.479671 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479675 | orchestrator |
2026-01-09 01:15:03.479680 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-09 01:15:03.479687 | orchestrator | Friday 09 January 2026 01:06:24 +0000 (0:00:02.367) 0:00:15.881 ********
2026-01-09 01:15:03.479699 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:03.479706 | orchestrator |
2026-01-09 01:15:03.479712 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-09 01:15:03.479724 | orchestrator | Friday 09 January 2026 01:06:30 +0000 (0:00:06.006) 0:00:21.887 ********
2026-01-09 01:15:03.479731 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.479737 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.479744 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.479750 |
orchestrator | 2026-01-09 01:15:03.479757 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-01-09 01:15:03.479763 | orchestrator | Friday 09 January 2026 01:06:30 +0000 (0:00:00.288) 0:00:22.176 ******** 2026-01-09 01:15:03.479770 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.479777 | orchestrator | 2026-01-09 01:15:03.479783 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-01-09 01:15:03.479789 | orchestrator | Friday 09 January 2026 01:07:03 +0000 (0:00:32.900) 0:00:55.076 ******** 2026-01-09 01:15:03.479796 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.479802 | orchestrator | 2026-01-09 01:15:03.479808 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-09 01:15:03.479815 | orchestrator | Friday 09 January 2026 01:07:18 +0000 (0:00:14.446) 0:01:09.523 ******** 2026-01-09 01:15:03.479822 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.479828 | orchestrator | 2026-01-09 01:15:03.479834 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-09 01:15:03.479841 | orchestrator | Friday 09 January 2026 01:07:31 +0000 (0:00:13.148) 0:01:22.672 ******** 2026-01-09 01:15:03.479859 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.479866 | orchestrator | 2026-01-09 01:15:03.479872 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-01-09 01:15:03.479879 | orchestrator | Friday 09 January 2026 01:07:32 +0000 (0:00:01.020) 0:01:23.692 ******** 2026-01-09 01:15:03.479885 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.479917 | orchestrator | 2026-01-09 01:15:03.479925 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-09 01:15:03.479931 | orchestrator | Friday 09 January 2026 01:07:32 +0000 
(0:00:00.470) 0:01:24.162 ******** 2026-01-09 01:15:03.479938 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.479945 | orchestrator | 2026-01-09 01:15:03.479952 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-01-09 01:15:03.479959 | orchestrator | Friday 09 January 2026 01:07:33 +0000 (0:00:00.497) 0:01:24.660 ******** 2026-01-09 01:15:03.479966 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.479972 | orchestrator | 2026-01-09 01:15:03.479979 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-09 01:15:03.479986 | orchestrator | Friday 09 January 2026 01:07:56 +0000 (0:00:23.436) 0:01:48.096 ******** 2026-01-09 01:15:03.480019 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480026 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480033 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480039 | orchestrator | 2026-01-09 01:15:03.480045 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-01-09 01:15:03.480052 | orchestrator | 2026-01-09 01:15:03.480058 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-01-09 01:15:03.480064 | orchestrator | Friday 09 January 2026 01:07:56 +0000 (0:00:00.372) 0:01:48.468 ******** 2026-01-09 01:15:03.480071 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.480077 | orchestrator | 2026-01-09 01:15:03.480083 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-01-09 01:15:03.480089 | orchestrator | Friday 09 January 2026 01:07:57 +0000 (0:00:00.938) 0:01:49.407 ******** 2026-01-09 01:15:03.480101 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480124 | 
orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480132 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480138 | orchestrator | 2026-01-09 01:15:03.480145 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-01-09 01:15:03.480152 | orchestrator | Friday 09 January 2026 01:08:00 +0000 (0:00:02.094) 0:01:51.501 ******** 2026-01-09 01:15:03.480158 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480165 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480171 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480178 | orchestrator | 2026-01-09 01:15:03.480185 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-01-09 01:15:03.480191 | orchestrator | Friday 09 January 2026 01:08:02 +0000 (0:00:02.903) 0:01:54.405 ******** 2026-01-09 01:15:03.480198 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480205 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480212 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480218 | orchestrator | 2026-01-09 01:15:03.480225 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-01-09 01:15:03.480231 | orchestrator | Friday 09 January 2026 01:08:03 +0000 (0:00:00.334) 0:01:54.739 ******** 2026-01-09 01:15:03.480238 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-09 01:15:03.480245 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480251 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-09 01:15:03.480258 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480265 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-09 01:15:03.480286 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-01-09 01:15:03.480293 | orchestrator | 2026-01-09 01:15:03.480298 | orchestrator | TASK 
[service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-01-09 01:15:03.480304 | orchestrator | Friday 09 January 2026 01:08:13 +0000 (0:00:10.000) 0:02:04.739 ******** 2026-01-09 01:15:03.480310 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480320 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480326 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480332 | orchestrator | 2026-01-09 01:15:03.480339 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-01-09 01:15:03.480345 | orchestrator | Friday 09 January 2026 01:08:13 +0000 (0:00:00.375) 0:02:05.115 ******** 2026-01-09 01:15:03.480351 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-09 01:15:03.480358 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480364 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-09 01:15:03.480371 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480376 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-09 01:15:03.480383 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480389 | orchestrator | 2026-01-09 01:15:03.480394 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-09 01:15:03.480401 | orchestrator | Friday 09 January 2026 01:08:14 +0000 (0:00:00.659) 0:02:05.775 ******** 2026-01-09 01:15:03.480407 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480413 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480419 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480425 | orchestrator | 2026-01-09 01:15:03.480431 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-01-09 01:15:03.480438 | orchestrator | Friday 09 January 2026 01:08:15 +0000 (0:00:00.807) 0:02:06.582 ******** 2026-01-09 01:15:03.480444 | orchestrator | 
skipping: [testbed-node-1] 2026-01-09 01:15:03.480451 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480457 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480475 | orchestrator | 2026-01-09 01:15:03.480482 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-01-09 01:15:03.480488 | orchestrator | Friday 09 January 2026 01:08:16 +0000 (0:00:01.209) 0:02:07.792 ******** 2026-01-09 01:15:03.480499 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480506 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480519 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480526 | orchestrator | 2026-01-09 01:15:03.480532 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-01-09 01:15:03.480539 | orchestrator | Friday 09 January 2026 01:08:18 +0000 (0:00:02.317) 0:02:10.110 ******** 2026-01-09 01:15:03.480567 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480574 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480580 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.480586 | orchestrator | 2026-01-09 01:15:03.480598 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-09 01:15:03.480605 | orchestrator | Friday 09 January 2026 01:08:41 +0000 (0:00:22.876) 0:02:32.986 ******** 2026-01-09 01:15:03.480611 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480618 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480624 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.480654 | orchestrator | 2026-01-09 01:15:03.480661 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-09 01:15:03.480680 | orchestrator | Friday 09 January 2026 01:08:57 +0000 (0:00:15.675) 0:02:48.662 ******** 2026-01-09 01:15:03.480709 | orchestrator | ok: 
[testbed-node-0] 2026-01-09 01:15:03.480716 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480722 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480729 | orchestrator | 2026-01-09 01:15:03.480735 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-01-09 01:15:03.480742 | orchestrator | Friday 09 January 2026 01:08:58 +0000 (0:00:00.922) 0:02:49.585 ******** 2026-01-09 01:15:03.480748 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480754 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480759 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.480765 | orchestrator | 2026-01-09 01:15:03.480771 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-01-09 01:15:03.480777 | orchestrator | Friday 09 January 2026 01:09:11 +0000 (0:00:13.806) 0:03:03.391 ******** 2026-01-09 01:15:03.480783 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480790 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480796 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480802 | orchestrator | 2026-01-09 01:15:03.480808 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-09 01:15:03.480815 | orchestrator | Friday 09 January 2026 01:09:12 +0000 (0:00:01.045) 0:03:04.437 ******** 2026-01-09 01:15:03.480821 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.480827 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.480833 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.480840 | orchestrator | 2026-01-09 01:15:03.480846 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-09 01:15:03.480852 | orchestrator | 2026-01-09 01:15:03.480859 | orchestrator | TASK [nova : include_tasks] **************************************************** 
2026-01-09 01:15:03.480865 | orchestrator | Friday 09 January 2026 01:09:13 +0000 (0:00:00.548) 0:03:04.986 ******** 2026-01-09 01:15:03.480872 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.480879 | orchestrator | 2026-01-09 01:15:03.480885 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-09 01:15:03.480892 | orchestrator | Friday 09 January 2026 01:09:14 +0000 (0:00:00.572) 0:03:05.558 ******** 2026-01-09 01:15:03.480898 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-09 01:15:03.480905 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-09 01:15:03.480911 | orchestrator | 2026-01-09 01:15:03.480918 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-09 01:15:03.480924 | orchestrator | Friday 09 January 2026 01:09:17 +0000 (0:00:03.277) 0:03:08.835 ******** 2026-01-09 01:15:03.480935 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-01-09 01:15:03.480942 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-09 01:15:03.480953 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-09 01:15:03.480959 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-09 01:15:03.480965 | orchestrator | 2026-01-09 01:15:03.480972 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-09 01:15:03.480978 | orchestrator | Friday 09 January 2026 01:09:25 +0000 (0:00:08.157) 0:03:16.993 ******** 2026-01-09 01:15:03.480984 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2026-01-09 01:15:03.480991 | orchestrator | 2026-01-09 01:15:03.480997 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-01-09 01:15:03.481003 | orchestrator | Friday 09 January 2026 01:09:29 +0000 (0:00:03.659) 0:03:20.652 ******** 2026-01-09 01:15:03.481009 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-09 01:15:03.481016 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-09 01:15:03.481022 | orchestrator | 2026-01-09 01:15:03.481028 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-09 01:15:03.481034 | orchestrator | Friday 09 January 2026 01:09:33 +0000 (0:00:04.055) 0:03:24.707 ******** 2026-01-09 01:15:03.481039 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-09 01:15:03.481046 | orchestrator | 2026-01-09 01:15:03.481053 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-09 01:15:03.481059 | orchestrator | Friday 09 January 2026 01:09:36 +0000 (0:00:03.762) 0:03:28.470 ******** 2026-01-09 01:15:03.481065 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-09 01:15:03.481071 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-09 01:15:03.481077 | orchestrator | 2026-01-09 01:15:03.481084 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-09 01:15:03.481096 | orchestrator | Friday 09 January 2026 01:09:45 +0000 (0:00:08.689) 0:03:37.160 ******** 2026-01-09 01:15:03.481108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481171 | orchestrator | 2026-01-09 01:15:03.481178 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-09 01:15:03.481184 | orchestrator | Friday 09 January 2026 01:09:46 +0000 (0:00:01.255) 0:03:38.415 ******** 2026-01-09 01:15:03.481196 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.481202 | orchestrator | 2026-01-09 01:15:03.481209 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-09 01:15:03.481215 | orchestrator | Friday 09 January 2026 01:09:47 +0000 (0:00:00.145) 0:03:38.560 ******** 2026-01-09 01:15:03.481221 | 
orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.481228 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.481234 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.481240 | orchestrator | 2026-01-09 01:15:03.481247 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-09 01:15:03.481253 | orchestrator | Friday 09 January 2026 01:09:47 +0000 (0:00:00.360) 0:03:38.921 ******** 2026-01-09 01:15:03.481259 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-09 01:15:03.481265 | orchestrator | 2026-01-09 01:15:03.481290 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-09 01:15:03.481296 | orchestrator | Friday 09 January 2026 01:09:48 +0000 (0:00:00.940) 0:03:39.862 ******** 2026-01-09 01:15:03.481302 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.481308 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.481315 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.481321 | orchestrator | 2026-01-09 01:15:03.481327 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-09 01:15:03.481334 | orchestrator | Friday 09 January 2026 01:09:48 +0000 (0:00:00.318) 0:03:40.180 ******** 2026-01-09 01:15:03.481340 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.481346 | orchestrator | 2026-01-09 01:15:03.481353 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-09 01:15:03.481359 | orchestrator | Friday 09 January 2026 01:09:49 +0000 (0:00:00.565) 0:03:40.746 ******** 2026-01-09 01:15:03.481369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.481397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.481425 | orchestrator | 2026-01-09 01:15:03.481431 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-09 01:15:03.481437 | orchestrator | Friday 09 January 2026 01:09:52 +0000 (0:00:02.876) 0:03:43.623 ******** 2026-01-09 01:15:03.481444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.481466 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.481476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.481491 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.481502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.481518 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.481522 | orchestrator | 2026-01-09 01:15:03.481526 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-09 01:15:03.481529 | orchestrator | Friday 09 January 2026 01:09:52 +0000 (0:00:00.612) 0:03:44.236 ******** 2026-01-09 
01:15:03.481536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.481544 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.481941 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.481980 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.481987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.481998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.482005 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.482041 | orchestrator | 2026-01-09 01:15:03.482051 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2026-01-09 01:15:03.482058 | orchestrator | Friday 09 January 2026 01:09:53 +0000 (0:00:00.853) 0:03:45.089 ******** 2026-01-09 01:15:03.482072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482102 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482129 | orchestrator | 2026-01-09 01:15:03.482136 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-09 01:15:03.482143 | orchestrator | Friday 09 
January 2026 01:09:56 +0000 (0:00:02.816) 0:03:47.905 ******** 2026-01-09 01:15:03.482149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482223 | orchestrator | 2026-01-09 01:15:03.482227 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-09 01:15:03.482231 | orchestrator | Friday 09 January 2026 01:10:02 +0000 (0:00:05.832) 0:03:53.738 
******** 2026-01-09 01:15:03.482237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.482248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.482255 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.482261 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.482266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.482286 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.482295 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-09 01:15:03.482338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.482346 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.482352 | orchestrator | 2026-01-09 01:15:03.482358 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2026-01-09 01:15:03.482364 | orchestrator | Friday 09 January 2026 01:10:02 +0000 (0:00:00.602) 0:03:54.341 ******** 2026-01-09 01:15:03.482371 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.482377 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.482383 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.482389 | orchestrator | 2026-01-09 01:15:03.482400 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-09 01:15:03.482406 | orchestrator | Friday 09 January 2026 01:10:04 +0000 (0:00:01.643) 0:03:55.984 ******** 2026-01-09 01:15:03.482413 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.482419 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.482425 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.482431 | orchestrator | 2026-01-09 01:15:03.482437 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-09 01:15:03.482444 | orchestrator | Friday 09 January 2026 01:10:04 +0000 (0:00:00.317) 0:03:56.301 ******** 2026-01-09 01:15:03.482450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:03.482486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.482506 | orchestrator | 2026-01-09 01:15:03.482513 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-09 01:15:03.482519 | orchestrator | Friday 09 January 2026 01:10:07 +0000 (0:00:02.232) 0:03:58.534 ******** 2026-01-09 01:15:03.482525 | orchestrator | 2026-01-09 01:15:03.482532 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-09 01:15:03.482538 | orchestrator | Friday 09 January 2026 01:10:07 +0000 (0:00:00.139) 0:03:58.673 ******** 2026-01-09 01:15:03.482545 | orchestrator | 2026-01-09 01:15:03.482551 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-09 01:15:03.482557 | orchestrator | Friday 09 January 2026 01:10:07 +0000 (0:00:00.131) 0:03:58.805 ******** 2026-01-09 01:15:03.482568 | orchestrator | 2026-01-09 01:15:03.482575 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-09 01:15:03.482581 | orchestrator | Friday 09 January 2026 01:10:07 +0000 (0:00:00.138) 0:03:58.944 ******** 
2026-01-09 01:15:03.482588 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.482594 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.482601 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.482607 | orchestrator | 2026-01-09 01:15:03.482614 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-09 01:15:03.482620 | orchestrator | Friday 09 January 2026 01:10:24 +0000 (0:00:17.354) 0:04:16.298 ******** 2026-01-09 01:15:03.482627 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.482634 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.482640 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.482647 | orchestrator | 2026-01-09 01:15:03.482656 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-09 01:15:03.482662 | orchestrator | 2026-01-09 01:15:03.482668 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-09 01:15:03.482675 | orchestrator | Friday 09 January 2026 01:10:35 +0000 (0:00:10.756) 0:04:27.055 ******** 2026-01-09 01:15:03.482682 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.482689 | orchestrator | 2026-01-09 01:15:03.482695 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-09 01:15:03.482701 | orchestrator | Friday 09 January 2026 01:10:36 +0000 (0:00:01.270) 0:04:28.325 ******** 2026-01-09 01:15:03.482708 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.482714 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.482721 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.482727 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.482733 | orchestrator | skipping: [testbed-node-1] 
2026-01-09 01:15:03.482739 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.482745 | orchestrator | 2026-01-09 01:15:03.482751 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-09 01:15:03.482757 | orchestrator | Friday 09 January 2026 01:10:37 +0000 (0:00:00.649) 0:04:28.975 ******** 2026-01-09 01:15:03.482763 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.482768 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.482774 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.482780 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:15:03.482786 | orchestrator | 2026-01-09 01:15:03.482793 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-09 01:15:03.482803 | orchestrator | Friday 09 January 2026 01:10:38 +0000 (0:00:01.253) 0:04:30.228 ******** 2026-01-09 01:15:03.482810 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-09 01:15:03.482816 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-09 01:15:03.482823 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-09 01:15:03.482829 | orchestrator | 2026-01-09 01:15:03.482835 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-09 01:15:03.482842 | orchestrator | Friday 09 January 2026 01:10:39 +0000 (0:00:00.754) 0:04:30.982 ******** 2026-01-09 01:15:03.482848 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-09 01:15:03.482854 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-09 01:15:03.482860 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-09 01:15:03.482867 | orchestrator | 2026-01-09 01:15:03.482873 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-09 01:15:03.482879 | 
orchestrator | Friday 09 January 2026 01:10:41 +0000 (0:00:01.530) 0:04:32.513 ******** 2026-01-09 01:15:03.482885 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-09 01:15:03.482892 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.482902 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-09 01:15:03.482909 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.482915 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-09 01:15:03.482921 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.482928 | orchestrator | 2026-01-09 01:15:03.482934 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-09 01:15:03.482940 | orchestrator | Friday 09 January 2026 01:10:41 +0000 (0:00:00.523) 0:04:33.037 ******** 2026-01-09 01:15:03.482947 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 01:15:03.482953 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 01:15:03.482960 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.482966 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 01:15:03.482972 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 01:15:03.482979 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.482985 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-09 01:15:03.482991 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-09 01:15:03.482997 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.483004 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-09 01:15:03.483010 | orchestrator | changed: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-01-09 01:15:03.483017 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-09 01:15:03.483024 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-09 01:15:03.483030 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-09 01:15:03.483036 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-09 01:15:03.483042 | orchestrator | 2026-01-09 01:15:03.483049 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-09 01:15:03.483055 | orchestrator | Friday 09 January 2026 01:10:43 +0000 (0:00:02.144) 0:04:35.181 ******** 2026-01-09 01:15:03.483061 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.483067 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.483074 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.483081 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.483087 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.483093 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.483100 | orchestrator | 2026-01-09 01:15:03.483106 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-09 01:15:03.483115 | orchestrator | Friday 09 January 2026 01:10:44 +0000 (0:00:01.263) 0:04:36.445 ******** 2026-01-09 01:15:03.483122 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.483128 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.483134 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.483140 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.483146 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.483152 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.483158 | orchestrator | 2026-01-09 01:15:03.483165 | 
orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-09 01:15:03.483172 | orchestrator | Friday 09 January 2026 01:10:46 +0000 (0:00:01.973) 0:04:38.419 ******** 2026-01-09 01:15:03.483180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483343 | orchestrator | 2026-01-09 01:15:03.483349 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-09 01:15:03.483356 | orchestrator | Friday 09 January 2026 01:10:49 +0000 (0:00:02.226) 0:04:40.645 ******** 2026-01-09 01:15:03.483362 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:03.483370 | orchestrator | 2026-01-09 01:15:03.483377 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-09 01:15:03.483384 | orchestrator | Friday 09 January 2026 01:10:50 +0000 (0:00:01.274) 0:04:41.920 ******** 2026-01-09 01:15:03.483391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.483475 | orchestrator | 2026-01-09 01:15:03.483479 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-09 01:15:03.483483 | orchestrator | Friday 09 January 2026 01:10:53 +0000 (0:00:03.517) 0:04:45.437 ******** 2026-01-09 01:15:03.483489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483502 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.483506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483525 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.483529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483543 | orchestrator | skipping: [testbed-node-5] 2026-01-09 
01:15:03.483548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.483553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483557 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.483563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.483567 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483571 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.483575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.483579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483585 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.483589 | orchestrator | 
2026-01-09 01:15:03.483593 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-09 01:15:03.483597 | orchestrator | Friday 09 January 2026 01:10:55 +0000 (0:00:01.768) 0:04:47.206 ******** 2026-01-09 01:15:03.483602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483855 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.483861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483889 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.483900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.483907 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.483920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483927 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.483933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.483940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.483965 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.483973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.483983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2026-01-09 01:15:03.483989 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.483994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.484006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.484011 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.484018 | orchestrator | 2026-01-09 01:15:03.484024 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-09 01:15:03.484031 | orchestrator | Friday 09 January 2026 01:10:57 +0000 (0:00:02.250) 0:04:49.457 ******** 2026-01-09 01:15:03.484036 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.484042 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.484048 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.484054 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-09 01:15:03.484061 | orchestrator | 2026-01-09 01:15:03.484067 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-09 01:15:03.484073 | orchestrator | Friday 09 January 2026 01:10:59 +0000 (0:00:01.116) 0:04:50.573 ******** 2026-01-09 01:15:03.484080 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 01:15:03.484090 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-09 01:15:03.484097 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-09 01:15:03.484103 | orchestrator | 2026-01-09 01:15:03.484110 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-09 01:15:03.484116 | orchestrator | Friday 09 January 2026 01:11:00 +0000 (0:00:01.077) 0:04:51.651 ******** 2026-01-09 01:15:03.484122 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 01:15:03.484128 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-09 01:15:03.484134 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-09 01:15:03.484140 | orchestrator | 2026-01-09 01:15:03.484145 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-09 01:15:03.484151 | orchestrator | Friday 09 January 2026 01:11:01 +0000 (0:00:00.968) 0:04:52.620 ******** 2026-01-09 01:15:03.484157 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:15:03.484163 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:15:03.484168 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:15:03.484174 | orchestrator | 2026-01-09 01:15:03.484179 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-09 01:15:03.484186 | orchestrator | Friday 09 January 2026 01:11:01 +0000 (0:00:00.522) 0:04:53.142 ******** 2026-01-09 01:15:03.484192 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:15:03.484198 
| orchestrator | ok: [testbed-node-4] 2026-01-09 01:15:03.484203 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:15:03.484209 | orchestrator | 2026-01-09 01:15:03.484215 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-09 01:15:03.484221 | orchestrator | Friday 09 January 2026 01:11:02 +0000 (0:00:00.864) 0:04:54.006 ******** 2026-01-09 01:15:03.484227 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-09 01:15:03.484234 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-09 01:15:03.484240 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-09 01:15:03.484245 | orchestrator | 2026-01-09 01:15:03.484251 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-09 01:15:03.484257 | orchestrator | Friday 09 January 2026 01:11:03 +0000 (0:00:01.420) 0:04:55.427 ******** 2026-01-09 01:15:03.484262 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-09 01:15:03.484268 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-09 01:15:03.484289 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-09 01:15:03.484295 | orchestrator | 2026-01-09 01:15:03.484301 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-09 01:15:03.484307 | orchestrator | Friday 09 January 2026 01:11:05 +0000 (0:00:01.365) 0:04:56.792 ******** 2026-01-09 01:15:03.484313 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-09 01:15:03.484319 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-09 01:15:03.484325 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-09 01:15:03.484334 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-09 01:15:03.484340 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-09 
01:15:03.484345 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-09 01:15:03.484351 | orchestrator | 2026-01-09 01:15:03.484357 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-09 01:15:03.484363 | orchestrator | Friday 09 January 2026 01:11:09 +0000 (0:00:04.020) 0:05:00.812 ******** 2026-01-09 01:15:03.484369 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.484375 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.484381 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.484387 | orchestrator | 2026-01-09 01:15:03.484394 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-09 01:15:03.484400 | orchestrator | Friday 09 January 2026 01:11:09 +0000 (0:00:00.552) 0:05:01.365 ******** 2026-01-09 01:15:03.484406 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.484419 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.484425 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.484431 | orchestrator | 2026-01-09 01:15:03.484437 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-09 01:15:03.484444 | orchestrator | Friday 09 January 2026 01:11:10 +0000 (0:00:00.340) 0:05:01.705 ******** 2026-01-09 01:15:03.484450 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.484457 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.484464 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.484470 | orchestrator | 2026-01-09 01:15:03.484477 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-09 01:15:03.484483 | orchestrator | Friday 09 January 2026 01:11:11 +0000 (0:00:01.206) 0:05:02.911 ******** 2026-01-09 01:15:03.484496 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 
'client.nova secret', 'enabled': True}) 2026-01-09 01:15:03.484504 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-09 01:15:03.484510 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-09 01:15:03.484518 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-09 01:15:03.484524 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-09 01:15:03.484531 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-09 01:15:03.484538 | orchestrator | 2026-01-09 01:15:03.484544 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-09 01:15:03.484552 | orchestrator | Friday 09 January 2026 01:11:14 +0000 (0:00:03.320) 0:05:06.232 ******** 2026-01-09 01:15:03.484559 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-09 01:15:03.484566 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-09 01:15:03.484573 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-09 01:15:03.484579 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-09 01:15:03.484586 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.484592 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-09 01:15:03.484599 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.484606 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-09 01:15:03.484612 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.484618 | orchestrator | 2026-01-09 
01:15:03.484624 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-09 01:15:03.484631 | orchestrator | Friday 09 January 2026 01:11:18 +0000 (0:00:03.338) 0:05:09.571 ******** 2026-01-09 01:15:03.484638 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.484644 | orchestrator | 2026-01-09 01:15:03.484651 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-09 01:15:03.484658 | orchestrator | Friday 09 January 2026 01:11:18 +0000 (0:00:00.130) 0:05:09.701 ******** 2026-01-09 01:15:03.484665 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.484671 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.484678 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.484685 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.484692 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.484699 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.484706 | orchestrator | 2026-01-09 01:15:03.484712 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-09 01:15:03.484719 | orchestrator | Friday 09 January 2026 01:11:18 +0000 (0:00:00.580) 0:05:10.282 ******** 2026-01-09 01:15:03.484725 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-09 01:15:03.484739 | orchestrator | 2026-01-09 01:15:03.484746 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-09 01:15:03.484753 | orchestrator | Friday 09 January 2026 01:11:19 +0000 (0:00:00.726) 0:05:11.009 ******** 2026-01-09 01:15:03.484759 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.484765 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.484772 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.484778 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.484785 | orchestrator | 
skipping: [testbed-node-1] 2026-01-09 01:15:03.484792 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.484798 | orchestrator | 2026-01-09 01:15:03.484805 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-09 01:15:03.484812 | orchestrator | Friday 09 January 2026 01:11:20 +0000 (0:00:00.827) 0:05:11.837 ******** 2026-01-09 01:15:03.484824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.484958 | orchestrator | 2026-01-09 01:15:03.484963 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-09 01:15:03.484969 | orchestrator | Friday 09 January 2026 01:11:23 +0000 (0:00:03.366) 0:05:15.203 ******** 2026-01-09 01:15:03.484975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.484984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.484993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.484999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.485012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.485017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.485024 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.485084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-09 01:15:03.485090 | orchestrator |
2026-01-09 01:15:03.485098 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-01-09 01:15:03.485104 | orchestrator | Friday 09 January 2026 01:11:30 +0000 (0:00:06.616) 0:05:21.820 ********
2026-01-09 01:15:03.485109 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:15:03.485115 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:15:03.485120 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:15:03.485126 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485132 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485137 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485144 | orchestrator |
2026-01-09 01:15:03.485149 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-01-09 01:15:03.485155 | orchestrator | Friday 09 January 2026 01:11:31 +0000 (0:00:01.371) 0:05:23.192 ********
2026-01-09 01:15:03.485162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2026-01-09 01:15:03.485167 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2026-01-09 01:15:03.485171 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2026-01-09 01:15:03.485175 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-09 01:15:03.485182 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-09 01:15:03.485187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-09 
01:15:03.485197 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485204 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-09 01:15:03.485214 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2026-01-09 01:15:03.485221 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485226 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2026-01-09 01:15:03.485237 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485243 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-09 01:15:03.485249 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-09 01:15:03.485256 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-09 01:15:03.485261 | orchestrator |
2026-01-09 01:15:03.485267 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-01-09 01:15:03.485307 | orchestrator | Friday 09 January 2026 01:11:35 +0000 (0:00:03.893) 0:05:27.086 ********
2026-01-09 01:15:03.485314 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:15:03.485320 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:15:03.485327 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:15:03.485334 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485341 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485347 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485353 | orchestrator |
2026-01-09 01:15:03.485359 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-01-09 01:15:03.485365 | orchestrator | Friday 09 January 2026 01:11:36 +0000 (0:00:00.631) 0:05:27.717 ********
2026-01-09 01:15:03.485372 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2026-01-09 01:15:03.485378 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-09 01:15:03.485384 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2026-01-09 01:15:03.485390 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-09 01:15:03.485396 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2026-01-09 01:15:03.485402 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-09 01:15:03.485409 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485415 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485427 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485434 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485441 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485447 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485453 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2026-01-09 01:15:03.485460 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485466 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485475 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485481 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485487 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485494 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485504 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-09 01:15:03.485512 | orchestrator |
2026-01-09 01:15:03.485519 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-01-09 01:15:03.485526 | orchestrator | Friday 09 January 2026 01:11:41 +0000 (0:00:05.510) 0:05:33.227 ********
2026-01-09 01:15:03.485533 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2026-01-09 01:15:03.485539 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2026-01-09 01:15:03.485546 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2026-01-09 01:15:03.485552 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2026-01-09 01:15:03.485558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2026-01-09 01:15:03.485564 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 01:15:03.485574 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 01:15:03.485580 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-09 01:15:03.485587 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2026-01-09 01:15:03.485593 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2026-01-09 01:15:03.485599 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2026-01-09 01:15:03.485605 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2026-01-09 01:15:03.485611 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2026-01-09 01:15:03.485617 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485623 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2026-01-09 01:15:03.485629 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2026-01-09 01:15:03.485641 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485648 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-09 01:15:03.485654 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-09 01:15:03.485660 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-09 01:15:03.485666 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-09 01:15:03.485672 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-09 01:15:03.485678 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-09 01:15:03.485684 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-09 01:15:03.485690 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-09 01:15:03.485696 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-09 01:15:03.485702 | orchestrator |
2026-01-09 01:15:03.485709 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-01-09 01:15:03.485715 | orchestrator | Friday 09 January 2026 01:11:49 +0000 (0:00:07.631) 0:05:40.859 ********
2026-01-09 01:15:03.485721 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:15:03.485728 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:15:03.485734 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:15:03.485740 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485746 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485758 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485765 | orchestrator |
2026-01-09 01:15:03.485771 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-01-09 01:15:03.485777 | orchestrator | Friday 09 January 2026 01:11:50 +0000 (0:00:00.843) 0:05:41.702 ********
2026-01-09 01:15:03.485784 | orchestrator | skipping: [testbed-node-3]
2026-01-09 01:15:03.485790 | orchestrator | skipping: [testbed-node-4]
2026-01-09 01:15:03.485797 | orchestrator | skipping: [testbed-node-5]
2026-01-09 01:15:03.485804 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:03.485810 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:03.485817 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:03.485824 | orchestrator |
2026-01-09 01:15:03.485830 | orchestrator | TASK
[nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-09 01:15:03.485837 | orchestrator | Friday 09 January 2026 01:11:50 +0000 (0:00:00.636) 0:05:42.339 ******** 2026-01-09 01:15:03.485844 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.485850 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.485857 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.485864 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.485868 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.485872 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.485876 | orchestrator | 2026-01-09 01:15:03.485879 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-09 01:15:03.485883 | orchestrator | Friday 09 January 2026 01:11:53 +0000 (0:00:02.262) 0:05:44.602 ******** 2026-01-09 01:15:03.485887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.485898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.485904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.485908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.485915 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.485919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.485926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.485934 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.485946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-09 01:15:03.485956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-09 01:15:03.485962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.485973 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.485979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.485986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.485990 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.485994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.485998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.486006 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-09 01:15:03.486055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-09 01:15:03.486067 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.486074 | orchestrator | 2026-01-09 01:15:03.486080 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-09 01:15:03.486087 | orchestrator | Friday 09 January 2026 01:11:54 +0000 (0:00:01.344) 0:05:45.946 ******** 2026-01-09 01:15:03.486094 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-09 01:15:03.486099 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486102 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.486106 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-09 01:15:03.486110 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486114 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.486117 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-09 01:15:03.486121 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486125 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.486129 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-09 01:15:03.486133 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486137 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.486143 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-09 01:15:03.486151 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486160 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486166 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-09 01:15:03.486173 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-09 01:15:03.486179 | orchestrator | skipping: [testbed-node-2] 
2026-01-09 01:15:03.486185 | orchestrator | 2026-01-09 01:15:03.486191 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-09 01:15:03.486197 | orchestrator | Friday 09 January 2026 01:11:55 +0000 (0:00:00.962) 0:05:46.909 ******** 2026-01-09 01:15:03.486204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:03.486357 | orchestrator | 2026-01-09 01:15:03.486361 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-09 01:15:03.486365 | orchestrator | Friday 09 January 2026 01:11:58 +0000 (0:00:02.920) 0:05:49.829 ******** 2026-01-09 01:15:03.486369 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.486373 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.486377 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.486380 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.486384 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486388 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.486392 | orchestrator | 2026-01-09 01:15:03.486398 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486407 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.823) 
0:05:50.652 ******** 2026-01-09 01:15:03.486415 | orchestrator | 2026-01-09 01:15:03.486421 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486428 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.150) 0:05:50.803 ******** 2026-01-09 01:15:03.486434 | orchestrator | 2026-01-09 01:15:03.486439 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486445 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.133) 0:05:50.936 ******** 2026-01-09 01:15:03.486451 | orchestrator | 2026-01-09 01:15:03.486457 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486464 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.137) 0:05:51.073 ******** 2026-01-09 01:15:03.486471 | orchestrator | 2026-01-09 01:15:03.486478 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486482 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.132) 0:05:51.206 ******** 2026-01-09 01:15:03.486486 | orchestrator | 2026-01-09 01:15:03.486490 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-09 01:15:03.486494 | orchestrator | Friday 09 January 2026 01:11:59 +0000 (0:00:00.129) 0:05:51.335 ******** 2026-01-09 01:15:03.486498 | orchestrator | 2026-01-09 01:15:03.486502 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-09 01:15:03.486505 | orchestrator | Friday 09 January 2026 01:12:00 +0000 (0:00:00.307) 0:05:51.643 ******** 2026-01-09 01:15:03.486509 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.486513 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.486517 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.486521 | orchestrator | 
2026-01-09 01:15:03.486525 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-09 01:15:03.486529 | orchestrator | Friday 09 January 2026 01:12:07 +0000 (0:00:07.402) 0:05:59.045 ******** 2026-01-09 01:15:03.486533 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.486537 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.486540 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.486544 | orchestrator | 2026-01-09 01:15:03.486548 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-09 01:15:03.486552 | orchestrator | Friday 09 January 2026 01:12:25 +0000 (0:00:18.195) 0:06:17.241 ******** 2026-01-09 01:15:03.486556 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.486564 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.486568 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.486572 | orchestrator | 2026-01-09 01:15:03.486576 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-09 01:15:03.486583 | orchestrator | Friday 09 January 2026 01:12:47 +0000 (0:00:21.886) 0:06:39.128 ******** 2026-01-09 01:15:03.486587 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.486590 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.486594 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.486598 | orchestrator | 2026-01-09 01:15:03.486602 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-09 01:15:03.486606 | orchestrator | Friday 09 January 2026 01:13:18 +0000 (0:00:30.935) 0:07:10.063 ******** 2026-01-09 01:15:03.486610 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 
2026-01-09 01:15:03.486614 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-09 01:15:03.486618 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-01-09 01:15:03.486622 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.486626 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.486629 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.486633 | orchestrator | 2026-01-09 01:15:03.486637 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-09 01:15:03.486641 | orchestrator | Friday 09 January 2026 01:13:24 +0000 (0:00:06.232) 0:07:16.296 ******** 2026-01-09 01:15:03.486645 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.486649 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.486653 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.486657 | orchestrator | 2026-01-09 01:15:03.486661 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-09 01:15:03.486665 | orchestrator | Friday 09 January 2026 01:13:25 +0000 (0:00:00.900) 0:07:17.196 ******** 2026-01-09 01:15:03.486668 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:15:03.486672 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:15:03.486676 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:15:03.486680 | orchestrator | 2026-01-09 01:15:03.486688 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-09 01:15:03.486692 | orchestrator | Friday 09 January 2026 01:13:46 +0000 (0:00:20.504) 0:07:37.701 ******** 2026-01-09 01:15:03.486696 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.486700 | orchestrator | 2026-01-09 01:15:03.486704 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register 
themselves] **** 2026-01-09 01:15:03.486708 | orchestrator | Friday 09 January 2026 01:13:46 +0000 (0:00:00.129) 0:07:37.830 ******** 2026-01-09 01:15:03.486711 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.486715 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.486719 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.486722 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486726 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.486730 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-09 01:15:03.486735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 01:15:03.486739 | orchestrator | 2026-01-09 01:15:03.486743 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-09 01:15:03.486746 | orchestrator | Friday 09 January 2026 01:14:09 +0000 (0:00:22.648) 0:08:00.479 ******** 2026-01-09 01:15:03.486750 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.486754 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.486758 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.486762 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.486765 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.486774 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486778 | orchestrator | 2026-01-09 01:15:03.486782 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-09 01:15:03.486786 | orchestrator | Friday 09 January 2026 01:14:19 +0000 (0:00:10.226) 0:08:10.705 ******** 2026-01-09 01:15:03.486790 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.486793 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.486797 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.486801 
| orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.486804 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.486808 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-01-09 01:15:03.486812 | orchestrator | 2026-01-09 01:15:03.486816 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-09 01:15:03.486819 | orchestrator | Friday 09 January 2026 01:14:23 +0000 (0:00:03.905) 0:08:14.611 ******** 2026-01-09 01:15:03.486823 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 01:15:03.486827 | orchestrator | 2026-01-09 01:15:03.486831 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-09 01:15:03.486834 | orchestrator | Friday 09 January 2026 01:14:39 +0000 (0:00:16.211) 0:08:30.822 ******** 2026-01-09 01:15:03.486838 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 01:15:03.486842 | orchestrator | 2026-01-09 01:15:03.486846 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-09 01:15:03.486850 | orchestrator | Friday 09 January 2026 01:14:40 +0000 (0:00:01.278) 0:08:32.100 ******** 2026-01-09 01:15:03.486855 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.486864 | orchestrator | 2026-01-09 01:15:03.486871 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-09 01:15:03.486878 | orchestrator | Friday 09 January 2026 01:14:41 +0000 (0:00:01.267) 0:08:33.368 ******** 2026-01-09 01:15:03.486884 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 01:15:03.486891 | orchestrator | 2026-01-09 01:15:03.486897 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-09 01:15:03.486904 | orchestrator | Friday 09 January 2026 01:14:54 
+0000 (0:00:12.281) 0:08:45.650 ******** 2026-01-09 01:15:03.486911 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:15:03.486915 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:15:03.486919 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:15:03.486926 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:03.486932 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:15:03.486938 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:15:03.486945 | orchestrator | 2026-01-09 01:15:03.486955 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-09 01:15:03.486962 | orchestrator | 2026-01-09 01:15:03.486970 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-09 01:15:03.486974 | orchestrator | Friday 09 January 2026 01:14:56 +0000 (0:00:02.194) 0:08:47.844 ******** 2026-01-09 01:15:03.486978 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:03.486982 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:03.486985 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:03.486989 | orchestrator | 2026-01-09 01:15:03.486993 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-09 01:15:03.486997 | orchestrator | 2026-01-09 01:15:03.487001 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-09 01:15:03.487004 | orchestrator | Friday 09 January 2026 01:14:57 +0000 (0:00:01.142) 0:08:48.986 ******** 2026-01-09 01:15:03.487008 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.487012 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.487016 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.487019 | orchestrator | 2026-01-09 01:15:03.487023 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-09 01:15:03.487027 | orchestrator | 2026-01-09 
01:15:03.487039 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-09 01:15:03.487045 | orchestrator | Friday 09 January 2026 01:14:58 +0000 (0:00:00.554) 0:08:49.541 ******** 2026-01-09 01:15:03.487051 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-09 01:15:03.487056 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-09 01:15:03.487062 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487067 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-09 01:15:03.487078 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-09 01:15:03.487084 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487090 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:15:03.487097 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-09 01:15:03.487103 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-09 01:15:03.487109 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487115 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-09 01:15:03.487119 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-09 01:15:03.487126 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487132 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:15:03.487138 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-09 01:15:03.487144 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-09 01:15:03.487150 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487156 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-09 01:15:03.487162 | orchestrator | 
skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-09 01:15:03.487169 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487176 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:15:03.487184 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-09 01:15:03.487188 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-09 01:15:03.487192 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487195 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-09 01:15:03.487199 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-09 01:15:03.487203 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487207 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.487211 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-09 01:15:03.487215 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-09 01:15:03.487218 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487222 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-09 01:15:03.487226 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-09 01:15:03.487233 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487239 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.487246 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-09 01:15:03.487253 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-09 01:15:03.487259 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-09 01:15:03.487263 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-09 01:15:03.487267 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-09 01:15:03.487304 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-09 01:15:03.487313 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.487320 | orchestrator | 2026-01-09 01:15:03.487329 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-09 01:15:03.487333 | orchestrator | 2026-01-09 01:15:03.487337 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-09 01:15:03.487341 | orchestrator | Friday 09 January 2026 01:14:59 +0000 (0:00:01.393) 0:08:50.935 ******** 2026-01-09 01:15:03.487345 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-09 01:15:03.487349 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-09 01:15:03.487353 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.487357 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-09 01:15:03.487360 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-09 01:15:03.487367 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.487371 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-09 01:15:03.487375 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-09 01:15:03.487379 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.487383 | orchestrator | 2026-01-09 01:15:03.487386 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-09 01:15:03.487390 | orchestrator | 2026-01-09 01:15:03.487394 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-09 01:15:03.487398 | orchestrator | Friday 09 January 2026 01:15:00 +0000 (0:00:00.817) 0:08:51.752 ******** 2026-01-09 01:15:03.487402 | orchestrator | skipping: [testbed-node-0] 2026-01-09 
01:15:03.487405 | orchestrator | 2026-01-09 01:15:03.487409 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-09 01:15:03.487413 | orchestrator | 2026-01-09 01:15:03.487417 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-09 01:15:03.487421 | orchestrator | Friday 09 January 2026 01:15:00 +0000 (0:00:00.707) 0:08:52.460 ******** 2026-01-09 01:15:03.487424 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:03.487428 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:03.487432 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:03.487436 | orchestrator | 2026-01-09 01:15:03.487440 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:15:03.487443 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:15:03.487448 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-09 01:15:03.487458 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-09 01:15:03.487462 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-09 01:15:03.487465 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-09 01:15:03.487469 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-09 01:15:03.487473 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-09 01:15:03.487477 | orchestrator | 2026-01-09 01:15:03.487480 | orchestrator | 2026-01-09 01:15:03.487484 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 
01:15:03.487488 | orchestrator | Friday 09 January 2026 01:15:01 +0000 (0:00:00.448) 0:08:52.908 ******** 2026-01-09 01:15:03.487492 | orchestrator | =============================================================================== 2026-01-09 01:15:03.487495 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.90s 2026-01-09 01:15:03.487502 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.94s 2026-01-09 01:15:03.487506 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 23.44s 2026-01-09 01:15:03.487513 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.88s 2026-01-09 01:15:03.487521 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.65s 2026-01-09 01:15:03.487529 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.89s 2026-01-09 01:15:03.487535 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.50s 2026-01-09 01:15:03.487541 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.20s 2026-01-09 01:15:03.487547 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.35s 2026-01-09 01:15:03.487552 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.21s 2026-01-09 01:15:03.487558 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.68s 2026-01-09 01:15:03.487563 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.45s 2026-01-09 01:15:03.487569 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.81s 2026-01-09 01:15:03.487575 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.15s 2026-01-09 01:15:03.487580 | 
orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.28s 2026-01-09 01:15:03.487586 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.76s 2026-01-09 01:15:03.487592 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.23s 2026-01-09 01:15:03.487598 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.00s 2026-01-09 01:15:03.487605 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.69s 2026-01-09 01:15:03.487611 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 8.16s 2026-01-09 01:15:03.487618 | orchestrator | 2026-01-09 01:15:03 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:03.487628 | orchestrator | 2026-01-09 01:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:06.538125 | orchestrator | 2026-01-09 01:15:06 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:06.538177 | orchestrator | 2026-01-09 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:09.589603 | orchestrator | 2026-01-09 01:15:09 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:09.589681 | orchestrator | 2026-01-09 01:15:09 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:12.661606 | orchestrator | 2026-01-09 01:15:12 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:12.661671 | orchestrator | 2026-01-09 01:15:12 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:15.702854 | orchestrator | 2026-01-09 01:15:15 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:15.702913 | orchestrator | 2026-01-09 01:15:15 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:18.745960 | orchestrator | 
2026-01-09 01:15:18 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:18.746050 | orchestrator | 2026-01-09 01:15:18 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:21.795541 | orchestrator | 2026-01-09 01:15:21 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:21.795595 | orchestrator | 2026-01-09 01:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:24.852947 | orchestrator | 2026-01-09 01:15:24 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:24.853013 | orchestrator | 2026-01-09 01:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:27.894773 | orchestrator | 2026-01-09 01:15:27 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:27.894842 | orchestrator | 2026-01-09 01:15:27 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:30.947145 | orchestrator | 2026-01-09 01:15:30 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state STARTED 2026-01-09 01:15:30.947216 | orchestrator | 2026-01-09 01:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-09 01:15:33.995093 | orchestrator | 2026-01-09 01:15:33.995171 | orchestrator | 2026-01-09 01:15:33 | INFO  | Task 1bbbe364-8ea8-4aab-80a4-eb3262746ce4 is in state SUCCESS 2026-01-09 01:15:33.997229 | orchestrator | 2026-01-09 01:15:33.997305 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:15:33.997315 | orchestrator | 2026-01-09 01:15:33.997322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:15:33.997329 | orchestrator | Friday 09 January 2026 01:10:38 +0000 (0:00:00.394) 0:00:00.394 ******** 2026-01-09 01:15:33.997335 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.997343 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:15:33.997349 | 
orchestrator | ok: [testbed-node-2] 2026-01-09 01:15:33.997355 | orchestrator | 2026-01-09 01:15:33.997362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:15:33.997368 | orchestrator | Friday 09 January 2026 01:10:39 +0000 (0:00:00.329) 0:00:00.723 ******** 2026-01-09 01:15:33.997375 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-09 01:15:33.997382 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-09 01:15:33.997388 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-09 01:15:33.997394 | orchestrator | 2026-01-09 01:15:33.997401 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-09 01:15:33.997407 | orchestrator | 2026-01-09 01:15:33.997413 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:33.997420 | orchestrator | Friday 09 January 2026 01:10:39 +0000 (0:00:00.448) 0:00:01.172 ******** 2026-01-09 01:15:33.997426 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:33.997433 | orchestrator | 2026-01-09 01:15:33.997440 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-01-09 01:15:33.997446 | orchestrator | Friday 09 January 2026 01:10:40 +0000 (0:00:00.569) 0:00:01.741 ******** 2026-01-09 01:15:33.997453 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-09 01:15:33.997460 | orchestrator | 2026-01-09 01:15:33.997466 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-01-09 01:15:33.997472 | orchestrator | Friday 09 January 2026 01:10:44 +0000 (0:00:04.109) 0:00:05.851 ******** 2026-01-09 01:15:33.997479 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-09 01:15:33.997488 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-01-09 01:15:33.997495 | orchestrator | 2026-01-09 01:15:33.997501 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-09 01:15:33.997508 | orchestrator | Friday 09 January 2026 01:10:51 +0000 (0:00:07.315) 0:00:13.166 ******** 2026-01-09 01:15:33.997514 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-09 01:15:33.997521 | orchestrator | 2026-01-09 01:15:33.997527 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-09 01:15:33.997544 | orchestrator | Friday 09 January 2026 01:10:54 +0000 (0:00:03.206) 0:00:16.372 ******** 2026-01-09 01:15:33.997551 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-09 01:15:33.997562 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-09 01:15:33.997596 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-09 01:15:33.997609 | orchestrator | 2026-01-09 01:15:33.997619 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-09 01:15:33.997628 | orchestrator | Friday 09 January 2026 01:11:04 +0000 (0:00:09.145) 0:00:25.518 ******** 2026-01-09 01:15:33.997638 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-09 01:15:33.997649 | orchestrator | 2026-01-09 01:15:33.997659 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-01-09 01:15:33.997670 | orchestrator | Friday 09 January 2026 01:11:08 +0000 (0:00:04.237) 0:00:29.755 ******** 2026-01-09 01:15:33.997680 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-09 01:15:33.997691 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> 
admin) 2026-01-09 01:15:33.997702 | orchestrator | 2026-01-09 01:15:33.997712 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-01-09 01:15:33.997723 | orchestrator | Friday 09 January 2026 01:11:15 +0000 (0:00:07.220) 0:00:36.976 ******** 2026-01-09 01:15:33.997734 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-01-09 01:15:33.997742 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-01-09 01:15:33.997748 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-01-09 01:15:33.997754 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-01-09 01:15:33.997760 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-01-09 01:15:33.997768 | orchestrator | 2026-01-09 01:15:33.997775 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:33.997782 | orchestrator | Friday 09 January 2026 01:11:32 +0000 (0:00:17.377) 0:00:54.354 ******** 2026-01-09 01:15:33.997790 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:33.997800 | orchestrator | 2026-01-09 01:15:33.997811 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-01-09 01:15:33.997821 | orchestrator | Friday 09 January 2026 01:11:33 +0000 (0:00:00.677) 0:00:55.032 ******** 2026-01-09 01:15:33.997831 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.997842 | orchestrator | 2026-01-09 01:15:33.997852 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-01-09 01:15:33.997862 | orchestrator | Friday 09 January 2026 01:11:39 +0000 (0:00:05.634) 0:01:00.666 ******** 2026-01-09 01:15:33.997873 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.997883 | orchestrator 
| 2026-01-09 01:15:33.997895 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-09 01:15:33.997923 | orchestrator | Friday 09 January 2026 01:11:44 +0000 (0:00:05.691) 0:01:06.357 ******** 2026-01-09 01:15:33.997935 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.997946 | orchestrator | 2026-01-09 01:15:33.997956 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-01-09 01:15:33.997972 | orchestrator | Friday 09 January 2026 01:11:48 +0000 (0:00:03.627) 0:01:09.984 ******** 2026-01-09 01:15:33.997985 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-09 01:15:33.997996 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-09 01:15:33.998006 | orchestrator | 2026-01-09 01:15:33.998053 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-01-09 01:15:33.998066 | orchestrator | Friday 09 January 2026 01:12:00 +0000 (0:00:11.867) 0:01:21.852 ******** 2026-01-09 01:15:33.998077 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-01-09 01:15:33.998089 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-01-09 01:15:33.998100 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-01-09 01:15:33.998116 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-01-09 01:15:33.998123 | orchestrator | 2026-01-09 01:15:33.998131 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-01-09 01:15:33.998139 | 
orchestrator | Friday 09 January 2026 01:12:18 +0000 (0:00:18.021) 0:01:39.873 ******** 2026-01-09 01:15:33.998147 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998154 | orchestrator | 2026-01-09 01:15:33.998162 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-01-09 01:15:33.998168 | orchestrator | Friday 09 January 2026 01:12:23 +0000 (0:00:04.687) 0:01:44.561 ******** 2026-01-09 01:15:33.998174 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998180 | orchestrator | 2026-01-09 01:15:33.998187 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-01-09 01:15:33.998193 | orchestrator | Friday 09 January 2026 01:12:29 +0000 (0:00:06.037) 0:01:50.599 ******** 2026-01-09 01:15:33.998199 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:33.998205 | orchestrator | 2026-01-09 01:15:33.998212 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-01-09 01:15:33.998218 | orchestrator | Friday 09 January 2026 01:12:29 +0000 (0:00:00.371) 0:01:50.970 ******** 2026-01-09 01:15:33.998225 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998231 | orchestrator | 2026-01-09 01:15:33.998237 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:33.998248 | orchestrator | Friday 09 January 2026 01:12:34 +0000 (0:00:04.856) 0:01:55.827 ******** 2026-01-09 01:15:33.998255 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:33.998261 | orchestrator | 2026-01-09 01:15:33.998286 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-01-09 01:15:33.998292 | orchestrator | Friday 09 January 2026 01:12:35 +0000 (0:00:01.046) 0:01:56.873 ******** 2026-01-09 01:15:33.998299 | orchestrator | changed: 
[testbed-node-0] 2026-01-09 01:15:33.998305 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998311 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998317 | orchestrator | 2026-01-09 01:15:33.998323 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-01-09 01:15:33.998330 | orchestrator | Friday 09 January 2026 01:12:41 +0000 (0:00:05.573) 0:02:02.447 ******** 2026-01-09 01:15:33.998336 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998343 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998349 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998355 | orchestrator | 2026-01-09 01:15:33.998361 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-01-09 01:15:33.998367 | orchestrator | Friday 09 January 2026 01:12:45 +0000 (0:00:04.497) 0:02:06.944 ******** 2026-01-09 01:15:33.998374 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998380 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998386 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998392 | orchestrator | 2026-01-09 01:15:33.998398 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-01-09 01:15:33.998404 | orchestrator | Friday 09 January 2026 01:12:46 +0000 (0:00:00.783) 0:02:07.727 ******** 2026-01-09 01:15:33.998411 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998417 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:15:33.998423 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:15:33.998429 | orchestrator | 2026-01-09 01:15:33.998435 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-01-09 01:15:33.998442 | orchestrator | Friday 09 January 2026 01:12:48 +0000 (0:00:02.404) 0:02:10.131 ******** 2026-01-09 01:15:33.998448 | orchestrator | changed: [testbed-node-1] 2026-01-09 
01:15:33.998454 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998465 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998471 | orchestrator | 2026-01-09 01:15:33.998477 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-01-09 01:15:33.998484 | orchestrator | Friday 09 January 2026 01:12:50 +0000 (0:00:01.641) 0:02:11.773 ******** 2026-01-09 01:15:33.998490 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998496 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998502 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998508 | orchestrator | 2026-01-09 01:15:33.998515 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-01-09 01:15:33.998521 | orchestrator | Friday 09 January 2026 01:12:51 +0000 (0:00:01.343) 0:02:13.116 ******** 2026-01-09 01:15:33.998527 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998533 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998539 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998546 | orchestrator | 2026-01-09 01:15:33.998566 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-01-09 01:15:33.998573 | orchestrator | Friday 09 January 2026 01:12:53 +0000 (0:00:02.202) 0:02:15.319 ******** 2026-01-09 01:15:33.998579 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:33.998585 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:33.998591 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:33.998597 | orchestrator | 2026-01-09 01:15:33.998604 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-01-09 01:15:33.998610 | orchestrator | Friday 09 January 2026 01:12:56 +0000 (0:00:02.166) 0:02:17.485 ******** 2026-01-09 01:15:33.998616 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998622 | 
orchestrator | ok: [testbed-node-1] 2026-01-09 01:15:33.998629 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:15:33.998635 | orchestrator | 2026-01-09 01:15:33.998641 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-01-09 01:15:33.998647 | orchestrator | Friday 09 January 2026 01:12:56 +0000 (0:00:00.724) 0:02:18.209 ******** 2026-01-09 01:15:33.998653 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:15:33.998659 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998666 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:15:33.998672 | orchestrator | 2026-01-09 01:15:33.998678 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:33.998684 | orchestrator | Friday 09 January 2026 01:12:59 +0000 (0:00:02.674) 0:02:20.884 ******** 2026-01-09 01:15:33.998695 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:33.998711 | orchestrator | 2026-01-09 01:15:33.998723 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-09 01:15:33.998733 | orchestrator | Friday 09 January 2026 01:13:00 +0000 (0:00:00.681) 0:02:21.565 ******** 2026-01-09 01:15:33.998743 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998754 | orchestrator | 2026-01-09 01:15:33.998763 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-09 01:15:33.998773 | orchestrator | Friday 09 January 2026 01:13:04 +0000 (0:00:04.247) 0:02:25.812 ******** 2026-01-09 01:15:33.998782 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:15:33.998791 | orchestrator | 2026-01-09 01:15:33.998800 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-09 01:15:33.998809 | orchestrator | Friday 09 January 2026 01:13:07 +0000 (0:00:03.526) 
0:02:29.339 ********
2026-01-09 01:15:33.998819 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-09 01:15:33.998828 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-09 01:15:33.998838 | orchestrator |
2026-01-09 01:15:33.998848 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-09 01:15:33.998858 | orchestrator | Friday 09 January 2026 01:13:15 +0000 (0:00:07.910) 0:02:37.249 ********
2026-01-09 01:15:33.998868 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:15:33.998886 | orchestrator |
2026-01-09 01:15:33.998896 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-09 01:15:33.998912 | orchestrator | Friday 09 January 2026 01:13:18 +0000 (0:00:03.066) 0:02:40.315 ********
2026-01-09 01:15:33.998922 | orchestrator | ok: [testbed-node-0]
2026-01-09 01:15:33.998932 | orchestrator | ok: [testbed-node-1]
2026-01-09 01:15:33.998941 | orchestrator | ok: [testbed-node-2]
2026-01-09 01:15:33.998951 | orchestrator |
2026-01-09 01:15:33.998961 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-09 01:15:33.998972 | orchestrator | Friday 09 January 2026 01:13:19 +0000 (0:00:00.337) 0:02:40.653 ********
2026-01-09 01:15:33.998985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api':
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:33.999058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:33.999069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:33.999081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:33.999190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:33.999202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:33.999213 | orchestrator |
2026-01-09 01:15:33.999224 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-09 01:15:33.999235 | orchestrator | Friday 09 January 2026 01:13:21 +0000 (0:00:02.253) 0:02:42.907 ********
2026-01-09 01:15:33.999243 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:33.999249 | orchestrator |
2026-01-09 01:15:33.999260 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-09 01:15:33.999410 | orchestrator | Friday 09 January 2026 01:13:21 +0000 (0:00:00.127) 0:02:43.035 ********
2026-01-09 01:15:33.999419 | orchestrator | skipping: [testbed-node-0]
2026-01-09 01:15:33.999426 | orchestrator | skipping: [testbed-node-1]
2026-01-09 01:15:33.999432 | orchestrator | skipping: [testbed-node-2]
2026-01-09 01:15:33.999440 | orchestrator |
2026-01-09 01:15:33.999453 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-01-09 01:15:33.999469 | orchestrator | Friday 09 January 2026 01:13:22 +0000 (0:00:00.526) 0:02:43.562 ********
2026-01-09 01:15:33.999481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-09 01:15:33.999503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:33.999520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:33.999532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:33.999545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:33.999556 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:33.999577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:33.999588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:33.999600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:33.999613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:33.999620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:33.999627 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:33.999634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:33.999652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:33.999664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 
01:15:33.999682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:33.999697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:33.999709 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:33.999719 | orchestrator | 2026-01-09 01:15:33.999729 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:33.999755 | orchestrator | Friday 09 January 2026 01:13:22 +0000 (0:00:00.687) 0:02:44.249 ******** 2026-01-09 01:15:33.999767 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:15:33.999778 | orchestrator | 2026-01-09 01:15:33.999788 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-09 01:15:33.999798 | orchestrator | Friday 09 January 2026 01:13:23 
+0000 (0:00:00.552) 0:02:44.802 ******** 2026-01-09 01:15:33.999809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999848 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:33.999858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:33.999874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-01-09 01:15:33.999885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:33.999896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:33.999995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000028 | orchestrator | 2026-01-09 01:15:34.000039 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-09 01:15:34.000052 | orchestrator | Friday 09 January 2026 01:13:29 +0000 (0:00:05.655) 0:02:50.457 ******** 2026-01-09 01:15:34.000063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:34.000075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:34.000125 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:34.000142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:34.000161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:34.000225 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:34.000235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:34.000252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:34.000328 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:34.000339 | orchestrator | 2026-01-09 01:15:34.000349 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-09 01:15:34.000358 | orchestrator | Friday 09 January 2026 01:13:30 +0000 (0:00:01.545) 0:02:52.002 ******** 2026-01-09 01:15:34.000374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:34.000384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:34.000438 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:34.000454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-01-09 01:15:34.000465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 01:15:34.000530 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:34.000537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-09 01:15:34.000544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-09 01:15:34.000554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-09 01:15:34.000571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-09 
01:15:34.000578 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:15:34.000585 | orchestrator | 2026-01-09 01:15:34.000595 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-09 01:15:34.000605 | orchestrator | Friday 09 January 2026 01:13:31 +0000 (0:00:01.163) 0:02:53.166 ******** 2026-01-09 01:15:34.000617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 
01:15:34.000739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000746 | orchestrator | 2026-01-09 01:15:34.000752 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-09 01:15:34.000759 | orchestrator | Friday 09 January 2026 01:13:36 +0000 (0:00:04.778) 0:02:57.945 ******** 2026-01-09 01:15:34.000765 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-09 01:15:34.000773 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-09 01:15:34.000779 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-09 01:15:34.000785 | orchestrator | 2026-01-09 01:15:34.000792 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-09 01:15:34.000802 | orchestrator | Friday 09 January 2026 01:13:38 +0000 (0:00:02.018) 0:02:59.963 ******** 2026-01-09 01:15:34.000812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.000903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.000932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.000969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})
2026-01-09 01:15:34.000988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:34.000999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:34.001011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-09 01:15:34.001022 | orchestrator |
2026-01-09 01:15:34.001033 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-01-09 01:15:34.001044 | orchestrator | Friday 09 January 2026 01:13:57 +0000 (0:00:18.795) 0:03:18.758 ********
2026-01-09 01:15:34.001055 | orchestrator | changed: [testbed-node-0]
2026-01-09 01:15:34.001066 | orchestrator | changed: [testbed-node-2]
2026-01-09 01:15:34.001076 | orchestrator | changed: [testbed-node-1]
2026-01-09 01:15:34.001086 | orchestrator |
2026-01-09 01:15:34.001097 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-01-09 01:15:34.001108 | orchestrator | Friday 09 January 2026 01:13:59 +0000 (0:00:01.731) 0:03:20.490 ********
2026-01-09 01:15:34.001119 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001130 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001147 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001158 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001169 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001181 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001190 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001201 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001211 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001222 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001233 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001244 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001255 | orchestrator |
2026-01-09 01:15:34.001280 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-01-09 01:15:34.001301 | orchestrator | Friday 09 January 2026 01:14:04 +0000 (0:00:05.628) 0:03:26.118 ********
2026-01-09 01:15:34.001310 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001319 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001330 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001339 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001350 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001361 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001371 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001382 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001392 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001403 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001414 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001427 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001438 | orchestrator |
2026-01-09 01:15:34.001449 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-01-09 01:15:34.001460 | orchestrator | Friday 09 January 2026 01:14:11 +0000 (0:00:06.779) 0:03:32.897 ********
2026-01-09 01:15:34.001470 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001496 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001507 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-09 01:15:34.001525 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001536 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001546 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-09 01:15:34.001556 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001567 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001578 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-09 01:15:34.001588 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001599 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001608 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-09 01:15:34.001618 | orchestrator |
2026-01-09 01:15:34.001628 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-01-09 01:15:34.001638 | orchestrator | Friday 09 January 2026 01:14:18 +0000 (0:00:06.877) 0:03:39.775 ********
2026-01-09 01:15:34.001650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.001673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.001703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-09 01:15:34.001715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.001731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.001742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-09 01:15:34.001753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-09 01:15:34.001883 | orchestrator | 2026-01-09 01:15:34.001894 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-09 01:15:34.001904 | orchestrator | Friday 09 January 2026 01:14:22 +0000 (0:00:04.077) 0:03:43.852 ******** 2026-01-09 01:15:34.001914 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:15:34.001926 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:15:34.001937 | orchestrator | skipping: [testbed-node-2] 
2026-01-09 01:15:34.001948 | orchestrator | 2026-01-09 01:15:34.001960 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-09 01:15:34.001967 | orchestrator | Friday 09 January 2026 01:14:22 +0000 (0:00:00.316) 0:03:44.169 ******** 2026-01-09 01:15:34.001973 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.001980 | orchestrator | 2026-01-09 01:15:34.001991 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-09 01:15:34.002001 | orchestrator | Friday 09 January 2026 01:14:25 +0000 (0:00:02.459) 0:03:46.628 ******** 2026-01-09 01:15:34.002244 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002296 | orchestrator | 2026-01-09 01:15:34.002311 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-09 01:15:34.002323 | orchestrator | Friday 09 January 2026 01:14:27 +0000 (0:00:02.572) 0:03:49.201 ******** 2026-01-09 01:15:34.002335 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002345 | orchestrator | 2026-01-09 01:15:34.002356 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-09 01:15:34.002366 | orchestrator | Friday 09 January 2026 01:14:30 +0000 (0:00:02.816) 0:03:52.017 ******** 2026-01-09 01:15:34.002376 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002385 | orchestrator | 2026-01-09 01:15:34.002396 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-09 01:15:34.002406 | orchestrator | Friday 09 January 2026 01:14:34 +0000 (0:00:03.390) 0:03:55.407 ******** 2026-01-09 01:15:34.002418 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002429 | orchestrator | 2026-01-09 01:15:34.002439 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-09 01:15:34.002449 | orchestrator | 
Friday 09 January 2026 01:14:54 +0000 (0:00:20.129) 0:04:15.537 ******** 2026-01-09 01:15:34.002458 | orchestrator | 2026-01-09 01:15:34.002466 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-09 01:15:34.002476 | orchestrator | Friday 09 January 2026 01:14:54 +0000 (0:00:00.074) 0:04:15.611 ******** 2026-01-09 01:15:34.002487 | orchestrator | 2026-01-09 01:15:34.002497 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-09 01:15:34.002508 | orchestrator | Friday 09 January 2026 01:14:54 +0000 (0:00:00.071) 0:04:15.683 ******** 2026-01-09 01:15:34.002516 | orchestrator | 2026-01-09 01:15:34.002533 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-09 01:15:34.002543 | orchestrator | Friday 09 January 2026 01:14:54 +0000 (0:00:00.078) 0:04:15.762 ******** 2026-01-09 01:15:34.002551 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002561 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:34.002570 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:34.002579 | orchestrator | 2026-01-09 01:15:34.002596 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-09 01:15:34.002605 | orchestrator | Friday 09 January 2026 01:15:05 +0000 (0:00:11.462) 0:04:27.224 ******** 2026-01-09 01:15:34.002615 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002624 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:34.002632 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:34.002641 | orchestrator | 2026-01-09 01:15:34.002650 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-09 01:15:34.002659 | orchestrator | Friday 09 January 2026 01:15:11 +0000 (0:00:05.184) 0:04:32.408 ******** 2026-01-09 01:15:34.002668 | orchestrator | changed: [testbed-node-0] 
2026-01-09 01:15:34.002677 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:34.002686 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:34.002695 | orchestrator | 2026-01-09 01:15:34.002705 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-09 01:15:34.002714 | orchestrator | Friday 09 January 2026 01:15:16 +0000 (0:00:05.062) 0:04:37.471 ******** 2026-01-09 01:15:34.002726 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002738 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:34.002747 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:34.002755 | orchestrator | 2026-01-09 01:15:34.002764 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-09 01:15:34.002773 | orchestrator | Friday 09 January 2026 01:15:21 +0000 (0:00:05.209) 0:04:42.680 ******** 2026-01-09 01:15:34.002781 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:15:34.002790 | orchestrator | changed: [testbed-node-1] 2026-01-09 01:15:34.002798 | orchestrator | changed: [testbed-node-2] 2026-01-09 01:15:34.002807 | orchestrator | 2026-01-09 01:15:34.002815 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:15:34.002825 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-09 01:15:34.002835 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:15:34.002844 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-09 01:15:34.002853 | orchestrator | 2026-01-09 01:15:34.002863 | orchestrator | 2026-01-09 01:15:34.002872 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:15:34.002881 | orchestrator | Friday 09 January 2026 01:15:32 +0000 
(0:00:10.725) 0:04:53.406 ******** 2026-01-09 01:15:34.002904 | orchestrator | =============================================================================== 2026-01-09 01:15:34.002919 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.13s 2026-01-09 01:15:34.002929 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.80s 2026-01-09 01:15:34.002937 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.02s 2026-01-09 01:15:34.002945 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.38s 2026-01-09 01:15:34.002953 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.87s 2026-01-09 01:15:34.002961 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.46s 2026-01-09 01:15:34.002971 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.73s 2026-01-09 01:15:34.002981 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.15s 2026-01-09 01:15:34.002990 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.91s 2026-01-09 01:15:34.002999 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.32s 2026-01-09 01:15:34.003006 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.22s 2026-01-09 01:15:34.003011 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.88s 2026-01-09 01:15:34.003024 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.78s 2026-01-09 01:15:34.003031 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.04s 2026-01-09 01:15:34.003040 | orchestrator | octavia : Create nova keypair for amphora 
------------------------------- 5.69s 2026-01-09 01:15:34.003049 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.66s 2026-01-09 01:15:34.003059 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.64s 2026-01-09 01:15:34.003070 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.63s 2026-01-09 01:15:34.003080 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.57s 2026-01-09 01:15:34.003090 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.21s 2026-01-09 01:15:34.003100 | orchestrator | 2026-01-09 01:15:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:37.049137 | orchestrator | 2026-01-09 01:15:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:40.087561 | orchestrator | 2026-01-09 01:15:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:43.136468 | orchestrator | 2026-01-09 01:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:46.179393 | orchestrator | 2026-01-09 01:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:49.229054 | orchestrator | 2026-01-09 01:15:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:52.276831 | orchestrator | 2026-01-09 01:15:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:55.321804 | orchestrator | 2026-01-09 01:15:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:15:58.369437 | orchestrator | 2026-01-09 01:15:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:01.420838 | orchestrator | 2026-01-09 01:16:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:04.468349 | orchestrator | 2026-01-09 01:16:04 | INFO  | Wait 1 second(s) until refresh of running tasks 
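[Editorial note] The `PLAY RECAP` lines above use Ansible's fixed `key=value` layout, so a log post-processor can turn them into per-host counters. A minimal sketch, assuming only the field names visible in the recap lines (the function and regex are illustrative, not part of the job):

```python
import re

# One recap line per host, as Ansible prints at the end of a play.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap(line: str) -> dict:
    """Parse a single PLAY RECAP line into a host name plus integer counters."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    d = m.groupdict()
    return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

line = ("testbed-node-0 : ok=57  changed=38  unreachable=0 "
        "failed=0 skipped=7  rescued=0 ignored=0")
print(parse_recap(line)["changed"])  # 38
```

Summing `failed` and `unreachable` across hosts is a cheap way to gate a pipeline step on a recap like this one.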
2026-01-09 01:16:07.513553 | orchestrator | 2026-01-09 01:16:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:10.557821 | orchestrator | 2026-01-09 01:16:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:13.597495 | orchestrator | 2026-01-09 01:16:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:16.640858 | orchestrator | 2026-01-09 01:16:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:19.684535 | orchestrator | 2026-01-09 01:16:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:22.726342 | orchestrator | 2026-01-09 01:16:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:25.766944 | orchestrator | 2026-01-09 01:16:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:28.810913 | orchestrator | 2026-01-09 01:16:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:31.861115 | orchestrator | 2026-01-09 01:16:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-09 01:16:34.905760 | orchestrator | 2026-01-09 01:16:35.253036 | orchestrator | 2026-01-09 01:16:35.257493 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Jan 9 01:16:35 UTC 2026 2026-01-09 01:16:35.257582 | orchestrator | 2026-01-09 01:16:35.749588 | orchestrator | ok: Runtime: 0:36:00.674458 2026-01-09 01:16:36.061397 | 2026-01-09 01:16:36.061552 | TASK [Bootstrap services] 2026-01-09 01:16:36.918990 | orchestrator | 2026-01-09 01:16:36.919174 | orchestrator | # BOOTSTRAP 2026-01-09 01:16:36.919193 | orchestrator | 2026-01-09 01:16:36.919203 | orchestrator | + set -e 2026-01-09 01:16:36.919211 | orchestrator | + echo 2026-01-09 01:16:36.919221 | orchestrator | + echo '# BOOTSTRAP' 2026-01-09 01:16:36.919233 | orchestrator | + echo 2026-01-09 01:16:36.919291 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-09 01:16:36.926890 | orchestrator 
| + set -e 2026-01-09 01:16:36.926976 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-09 01:16:41.840021 | orchestrator | 2026-01-09 01:16:41 | INFO  | It takes a moment until task 089c2ab9-5f78-40d7-8915-bf11c136f3ae (flavor-manager) has been started and output is visible here. 2026-01-09 01:16:50.298464 | orchestrator | 2026-01-09 01:16:45 | INFO  | Flavor SCS-1L-1 created 2026-01-09 01:16:50.298593 | orchestrator | 2026-01-09 01:16:45 | INFO  | Flavor SCS-1L-1-5 created 2026-01-09 01:16:50.298608 | orchestrator | 2026-01-09 01:16:45 | INFO  | Flavor SCS-1V-2 created 2026-01-09 01:16:50.298616 | orchestrator | 2026-01-09 01:16:46 | INFO  | Flavor SCS-1V-2-5 created 2026-01-09 01:16:50.298623 | orchestrator | 2026-01-09 01:16:46 | INFO  | Flavor SCS-1V-4 created 2026-01-09 01:16:50.298629 | orchestrator | 2026-01-09 01:16:46 | INFO  | Flavor SCS-1V-4-10 created 2026-01-09 01:16:50.298636 | orchestrator | 2026-01-09 01:16:46 | INFO  | Flavor SCS-1V-8 created 2026-01-09 01:16:50.298645 | orchestrator | 2026-01-09 01:16:46 | INFO  | Flavor SCS-1V-8-20 created 2026-01-09 01:16:50.298664 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-4 created 2026-01-09 01:16:50.298672 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-4-10 created 2026-01-09 01:16:50.298679 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-8 created 2026-01-09 01:16:50.298686 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-8-20 created 2026-01-09 01:16:50.298692 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-16 created 2026-01-09 01:16:50.298699 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-2V-16-50 created 2026-01-09 01:16:50.298705 | orchestrator | 2026-01-09 01:16:47 | INFO  | Flavor SCS-4V-8 created 2026-01-09 01:16:50.298712 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor SCS-4V-8-20 created 2026-01-09 01:16:50.298734 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor 
SCS-4V-16 created 2026-01-09 01:16:50.298748 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor SCS-4V-16-50 created 2026-01-09 01:16:50.298756 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor SCS-4V-32 created 2026-01-09 01:16:50.298764 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor SCS-4V-32-100 created 2026-01-09 01:16:50.298771 | orchestrator | 2026-01-09 01:16:48 | INFO  | Flavor SCS-8V-16 created 2026-01-09 01:16:50.298777 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-8V-16-50 created 2026-01-09 01:16:50.298784 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-8V-32 created 2026-01-09 01:16:50.298791 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-8V-32-100 created 2026-01-09 01:16:50.298797 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-16V-32 created 2026-01-09 01:16:50.298805 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-16V-32-100 created 2026-01-09 01:16:50.298812 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-2V-4-20s created 2026-01-09 01:16:50.298818 | orchestrator | 2026-01-09 01:16:49 | INFO  | Flavor SCS-4V-8-50s created 2026-01-09 01:16:50.298825 | orchestrator | 2026-01-09 01:16:50 | INFO  | Flavor SCS-8V-32-100s created 2026-01-09 01:16:52.674440 | orchestrator | 2026-01-09 01:16:52 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-09 01:17:02.814794 | orchestrator | 2026-01-09 01:17:02 | INFO  | Task 94dc03cb-2296-4d78-97ba-8d9b19c354d6 (bootstrap-basic) was prepared for execution. 2026-01-09 01:17:02.814913 | orchestrator | 2026-01-09 01:17:02 | INFO  | It takes a moment until task 94dc03cb-2296-4d78-97ba-8d9b19c354d6 (bootstrap-basic) has been started and output is visible here. 
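[Editorial note] The flavor names created above (`SCS-1L-1-5` through `SCS-8V-32-100s`) follow the SCS flavor-naming scheme: `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, where a trailing `s` indicates local SSD storage. A sketch of decoding them; the meaning of the class letters (`V` = vCPU, `L` = oversubscribed vCPU) is an assumption taken from the SCS naming standard, not stated in this log:

```python
import re

# SCS flavor names: SCS-<cpus><class>-<ram-GiB>[-<disk-GB>[s]]
# Class letters 'V'/'L' per the SCS naming spec (assumed, not in the log).
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_flavor(name: str) -> dict:
    """Decode an SCS flavor name into its resource fields."""
    m = FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_class": m["cpu_class"],
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,  # None: no root disk
        "local_ssd": m["ssd"] is not None,
    }

print(parse_flavor("SCS-2V-4-20s"))
```

So `SCS-2V-4-20s` is 2 vCPUs, 4 GiB RAM, 20 GB local SSD root disk, matching the flavors the flavor-manager task registered above.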
2026-01-09 01:17:50.213167 | orchestrator | 2026-01-09 01:17:50.213309 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-09 01:17:50.213320 | orchestrator | 2026-01-09 01:17:50.213325 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-09 01:17:50.213330 | orchestrator | Friday 09 January 2026 01:17:07 +0000 (0:00:00.065) 0:00:00.065 ******** 2026-01-09 01:17:50.213334 | orchestrator | ok: [localhost] 2026-01-09 01:17:50.213340 | orchestrator | 2026-01-09 01:17:50.213344 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-09 01:17:50.213348 | orchestrator | Friday 09 January 2026 01:17:09 +0000 (0:00:01.892) 0:00:01.958 ******** 2026-01-09 01:17:50.213352 | orchestrator | ok: [localhost] 2026-01-09 01:17:50.213355 | orchestrator | 2026-01-09 01:17:50.213359 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-09 01:17:50.213363 | orchestrator | Friday 09 January 2026 01:17:17 +0000 (0:00:08.788) 0:00:10.746 ******** 2026-01-09 01:17:50.213367 | orchestrator | changed: [localhost] 2026-01-09 01:17:50.213372 | orchestrator | 2026-01-09 01:17:50.213375 | orchestrator | TASK [Create public network] *************************************************** 2026-01-09 01:17:50.213380 | orchestrator | Friday 09 January 2026 01:17:25 +0000 (0:00:07.717) 0:00:18.463 ******** 2026-01-09 01:17:50.213384 | orchestrator | changed: [localhost] 2026-01-09 01:17:50.213388 | orchestrator | 2026-01-09 01:17:50.213392 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-09 01:17:50.213396 | orchestrator | Friday 09 January 2026 01:17:31 +0000 (0:00:05.727) 0:00:24.191 ******** 2026-01-09 01:17:50.213403 | orchestrator | changed: [localhost] 2026-01-09 01:17:50.213407 | orchestrator | 2026-01-09 01:17:50.213411 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-09 01:17:50.213415 | orchestrator | Friday 09 January 2026 01:17:37 +0000 (0:00:06.429) 0:00:30.620 ******** 2026-01-09 01:17:50.213419 | orchestrator | changed: [localhost] 2026-01-09 01:17:50.213423 | orchestrator | 2026-01-09 01:17:50.213427 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-09 01:17:50.213447 | orchestrator | Friday 09 January 2026 01:17:42 +0000 (0:00:04.622) 0:00:35.243 ******** 2026-01-09 01:17:50.213451 | orchestrator | changed: [localhost] 2026-01-09 01:17:50.213455 | orchestrator | 2026-01-09 01:17:50.213459 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-09 01:17:50.213471 | orchestrator | Friday 09 January 2026 01:17:46 +0000 (0:00:03.967) 0:00:39.210 ******** 2026-01-09 01:17:50.213475 | orchestrator | ok: [localhost] 2026-01-09 01:17:50.213479 | orchestrator | 2026-01-09 01:17:50.213483 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:17:50.213487 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-09 01:17:50.213493 | orchestrator | 2026-01-09 01:17:50.213499 | orchestrator | 2026-01-09 01:17:50.213505 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:17:50.213512 | orchestrator | Friday 09 January 2026 01:17:49 +0000 (0:00:03.535) 0:00:42.745 ******** 2026-01-09 01:17:50.213517 | orchestrator | =============================================================================== 2026-01-09 01:17:50.213523 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.79s 2026-01-09 01:17:50.213530 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.72s 2026-01-09 01:17:50.213536 | 
orchestrator | Set public network to default ------------------------------------------- 6.43s 2026-01-09 01:17:50.213543 | orchestrator | Create public network --------------------------------------------------- 5.73s 2026-01-09 01:17:50.213587 | orchestrator | Create public subnet ---------------------------------------------------- 4.62s 2026-01-09 01:17:50.213593 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.97s 2026-01-09 01:17:50.213597 | orchestrator | Create manager role ----------------------------------------------------- 3.54s 2026-01-09 01:17:50.213601 | orchestrator | Gathering Facts --------------------------------------------------------- 1.89s 2026-01-09 01:17:52.825870 | orchestrator | 2026-01-09 01:17:52 | INFO  | It takes a moment until task 2abad8c1-8304-4aa4-9e35-85a9e0e6fcc8 (image-manager) has been started and output is visible here. 2026-01-09 01:18:34.535747 | orchestrator | 2026-01-09 01:17:55 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-09 01:18:34.535842 | orchestrator | 2026-01-09 01:17:55 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-09 01:18:34.535852 | orchestrator | 2026-01-09 01:17:55 | INFO  | Importing image Cirros 0.6.2 2026-01-09 01:18:34.535859 | orchestrator | 2026-01-09 01:17:55 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-09 01:18:34.535870 | orchestrator | 2026-01-09 01:17:58 | INFO  | Waiting for image to leave queued state... 2026-01-09 01:18:34.535881 | orchestrator | 2026-01-09 01:18:00 | INFO  | Waiting for import to complete... 
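[Editorial note] The image-manager output above shows the usual pattern: start the Glance import, then poll every few seconds until the image leaves the `queued`/`importing` state. The loop can be sketched generically; `get_status` stands in for a Glance client call and is an assumption, not the tool's actual API:

```python
import time

def wait_for_status(get_status, target="active", interval=10, timeout=600,
                    sleep=time.sleep):
    """Poll get_status() until it returns `target` or the timeout expires.

    `sleep` is injectable so the loop can be exercised without real delays.
    """
    waited = 0
    while waited <= timeout:
        status = get_status()
        if status == target:
            return status
        sleep(interval)   # "Waiting for import to complete..."
        waited += interval
    raise TimeoutError(f"image still {status!r} after {timeout}s")

# Simulated import lifecycle: 'queued' -> 'importing' -> 'active'.
states = iter(["queued", "importing", "importing", "active"])
print(wait_for_status(lambda: next(states), sleep=lambda s: None))  # active
```

Making the timeout explicit matters here: the amphora import below takes close to two minutes, versus seconds for the Cirros images.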
2026-01-09 01:18:34.535889 | orchestrator | 2026-01-09 01:18:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-09 01:18:34.535896 | orchestrator | 2026-01-09 01:18:10 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-09 01:18:34.535902 | orchestrator | 2026-01-09 01:18:10 | INFO  | Setting internal_version = 0.6.2 2026-01-09 01:18:34.535910 | orchestrator | 2026-01-09 01:18:10 | INFO  | Setting image_original_user = cirros 2026-01-09 01:18:34.535917 | orchestrator | 2026-01-09 01:18:10 | INFO  | Adding tag os:cirros 2026-01-09 01:18:34.535924 | orchestrator | 2026-01-09 01:18:10 | INFO  | Setting property architecture: x86_64 2026-01-09 01:18:34.535930 | orchestrator | 2026-01-09 01:18:11 | INFO  | Setting property hw_disk_bus: scsi 2026-01-09 01:18:34.535936 | orchestrator | 2026-01-09 01:18:11 | INFO  | Setting property hw_rng_model: virtio 2026-01-09 01:18:34.535942 | orchestrator | 2026-01-09 01:18:11 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-09 01:18:34.535948 | orchestrator | 2026-01-09 01:18:11 | INFO  | Setting property hw_watchdog_action: reset 2026-01-09 01:18:34.535954 | orchestrator | 2026-01-09 01:18:12 | INFO  | Setting property hypervisor_type: qemu 2026-01-09 01:18:34.535960 | orchestrator | 2026-01-09 01:18:12 | INFO  | Setting property os_distro: cirros 2026-01-09 01:18:34.535966 | orchestrator | 2026-01-09 01:18:12 | INFO  | Setting property os_purpose: minimal 2026-01-09 01:18:34.535972 | orchestrator | 2026-01-09 01:18:12 | INFO  | Setting property replace_frequency: never 2026-01-09 01:18:34.535979 | orchestrator | 2026-01-09 01:18:13 | INFO  | Setting property uuid_validity: none 2026-01-09 01:18:34.535985 | orchestrator | 2026-01-09 01:18:13 | INFO  | Setting property provided_until: none 2026-01-09 01:18:34.535992 | orchestrator | 2026-01-09 01:18:13 | INFO  | Setting property image_description: Cirros 2026-01-09 01:18:34.535999 | orchestrator | 2026-01-09 01:18:13 | INFO  | 
Setting property image_name: Cirros 2026-01-09 01:18:34.536005 | orchestrator | 2026-01-09 01:18:13 | INFO  | Setting property internal_version: 0.6.2 2026-01-09 01:18:34.536011 | orchestrator | 2026-01-09 01:18:14 | INFO  | Setting property image_original_user: cirros 2026-01-09 01:18:34.536042 | orchestrator | 2026-01-09 01:18:14 | INFO  | Setting property os_version: 0.6.2 2026-01-09 01:18:34.536056 | orchestrator | 2026-01-09 01:18:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-09 01:18:34.536062 | orchestrator | 2026-01-09 01:18:14 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-09 01:18:34.536066 | orchestrator | 2026-01-09 01:18:15 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-09 01:18:34.536069 | orchestrator | 2026-01-09 01:18:15 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-09 01:18:34.536073 | orchestrator | 2026-01-09 01:18:15 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-09 01:18:34.536077 | orchestrator | 2026-01-09 01:18:15 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-09 01:18:34.536085 | orchestrator | 2026-01-09 01:18:15 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-09 01:18:34.536089 | orchestrator | 2026-01-09 01:18:15 | INFO  | Importing image Cirros 0.6.3 2026-01-09 01:18:34.536093 | orchestrator | 2026-01-09 01:18:15 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-09 01:18:34.536096 | orchestrator | 2026-01-09 01:18:16 | INFO  | Waiting for image to leave queued state... 2026-01-09 01:18:34.536100 | orchestrator | 2026-01-09 01:18:18 | INFO  | Waiting for import to complete... 
2026-01-09 01:18:34.536116 | orchestrator | 2026-01-09 01:18:28 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-09 01:18:34.536121 | orchestrator | 2026-01-09 01:18:29 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-09 01:18:34.536124 | orchestrator | 2026-01-09 01:18:29 | INFO  | Setting internal_version = 0.6.3 2026-01-09 01:18:34.536128 | orchestrator | 2026-01-09 01:18:29 | INFO  | Setting image_original_user = cirros 2026-01-09 01:18:34.536132 | orchestrator | 2026-01-09 01:18:29 | INFO  | Adding tag os:cirros 2026-01-09 01:18:34.536136 | orchestrator | 2026-01-09 01:18:29 | INFO  | Setting property architecture: x86_64 2026-01-09 01:18:34.536139 | orchestrator | 2026-01-09 01:18:29 | INFO  | Setting property hw_disk_bus: scsi 2026-01-09 01:18:34.536143 | orchestrator | 2026-01-09 01:18:29 | INFO  | Setting property hw_rng_model: virtio 2026-01-09 01:18:34.536147 | orchestrator | 2026-01-09 01:18:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-09 01:18:34.536151 | orchestrator | 2026-01-09 01:18:30 | INFO  | Setting property hw_watchdog_action: reset 2026-01-09 01:18:34.536154 | orchestrator | 2026-01-09 01:18:30 | INFO  | Setting property hypervisor_type: qemu 2026-01-09 01:18:34.536159 | orchestrator | 2026-01-09 01:18:30 | INFO  | Setting property os_distro: cirros 2026-01-09 01:18:34.536163 | orchestrator | 2026-01-09 01:18:30 | INFO  | Setting property os_purpose: minimal 2026-01-09 01:18:34.536166 | orchestrator | 2026-01-09 01:18:31 | INFO  | Setting property replace_frequency: never 2026-01-09 01:18:34.536170 | orchestrator | 2026-01-09 01:18:31 | INFO  | Setting property uuid_validity: none 2026-01-09 01:18:34.536174 | orchestrator | 2026-01-09 01:18:31 | INFO  | Setting property provided_until: none 2026-01-09 01:18:34.536178 | orchestrator | 2026-01-09 01:18:31 | INFO  | Setting property image_description: Cirros 2026-01-09 01:18:34.536181 | orchestrator | 2026-01-09 01:18:32 | INFO  | 
Setting property image_name: Cirros 2026-01-09 01:18:34.536185 | orchestrator | 2026-01-09 01:18:32 | INFO  | Setting property internal_version: 0.6.3 2026-01-09 01:18:34.536193 | orchestrator | 2026-01-09 01:18:32 | INFO  | Setting property image_original_user: cirros 2026-01-09 01:18:34.536197 | orchestrator | 2026-01-09 01:18:32 | INFO  | Setting property os_version: 0.6.3 2026-01-09 01:18:34.536201 | orchestrator | 2026-01-09 01:18:32 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-09 01:18:34.536205 | orchestrator | 2026-01-09 01:18:33 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-09 01:18:34.536208 | orchestrator | 2026-01-09 01:18:33 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-09 01:18:34.536212 | orchestrator | 2026-01-09 01:18:33 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-09 01:18:34.536216 | orchestrator | 2026-01-09 01:18:33 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-09 01:18:34.861543 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-09 01:18:37.203173 | orchestrator | 2026-01-09 01:18:37 | INFO  | date: 2026-01-08 2026-01-09 01:18:37.203329 | orchestrator | 2026-01-09 01:18:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20260108.qcow2 2026-01-09 01:18:37.203381 | orchestrator | 2026-01-09 01:18:37 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260108.qcow2 2026-01-09 01:18:37.203603 | orchestrator | 2026-01-09 01:18:37 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260108.qcow2.CHECKSUM 2026-01-09 01:18:37.332536 | orchestrator | 2026-01-09 01:18:37 | INFO  | checksum: 171e279ef3b6285472a1abe27329fe35afe8602fafcc1133efbc4bfb68ff70dc 2026-01-09 01:18:37.419475 | orchestrator | 
2026-01-09 01:18:37 | INFO  | It takes a moment until task 39c1edcf-fcec-45d2-b1b8-12f27666e556 (image-manager) has been started and output is visible here. 2026-01-09 01:20:29.353907 | orchestrator | 2026-01-09 01:18:39 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-08' 2026-01-09 01:20:29.354112 | orchestrator | 2026-01-09 01:18:39 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260108.qcow2: 200 2026-01-09 01:20:29.354133 | orchestrator | 2026-01-09 01:18:39 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-08 2026-01-09 01:20:29.354141 | orchestrator | 2026-01-09 01:18:39 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260108.qcow2 2026-01-09 01:20:29.354149 | orchestrator | 2026-01-09 01:18:41 | INFO  | Waiting for image to leave queued state... 2026-01-09 01:20:29.354156 | orchestrator | 2026-01-09 01:18:43 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354163 | orchestrator | 2026-01-09 01:18:53 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354172 | orchestrator | 2026-01-09 01:19:03 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354182 | orchestrator | 2026-01-09 01:19:13 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354195 | orchestrator | 2026-01-09 01:19:23 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354233 | orchestrator | 2026-01-09 01:19:33 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354245 | orchestrator | 2026-01-09 01:19:43 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354255 | orchestrator | 2026-01-09 01:19:53 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354264 | orchestrator | 2026-01-09 01:20:04 | INFO  | Waiting for import to complete... 
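[Editorial note] Before this import, the bootstrap script fetched a `.CHECKSUM` file and logged the expected SHA-256 (`171e27…`). Verifying a downloaded image against such a digest can be sketched with the standard library; the chunked read keeps memory flat for a multi-GB qcow2, and the function name is illustrative:

```python
import hashlib
import os
import tempfile

def sha256_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file in 1 MiB chunks and compare its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# Demo on a throwaway file instead of a real qcow2 download.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
print(sha256_matches(path, hashlib.sha256(b"hello").hexdigest()))  # True
os.unlink(path)
```

Checking the digest before handing the file to Glance fails fast on a truncated or corrupted object-storage download.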
2026-01-09 01:20:29.354299 | orchestrator | 2026-01-09 01:20:14 | INFO  | Waiting for import to complete... 2026-01-09 01:20:29.354311 | orchestrator | 2026-01-09 01:20:24 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-08' successfully completed, reloading images 2026-01-09 01:20:29.354324 | orchestrator | 2026-01-09 01:20:24 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-08' 2026-01-09 01:20:29.354335 | orchestrator | 2026-01-09 01:20:24 | INFO  | Setting internal_version = 2026-01-08 2026-01-09 01:20:29.354347 | orchestrator | 2026-01-09 01:20:24 | INFO  | Setting image_original_user = ubuntu 2026-01-09 01:20:29.354359 | orchestrator | 2026-01-09 01:20:24 | INFO  | Adding tag amphora 2026-01-09 01:20:29.354371 | orchestrator | 2026-01-09 01:20:24 | INFO  | Adding tag os:ubuntu 2026-01-09 01:20:29.354382 | orchestrator | 2026-01-09 01:20:25 | INFO  | Setting property architecture: x86_64 2026-01-09 01:20:29.354392 | orchestrator | 2026-01-09 01:20:25 | INFO  | Setting property hw_disk_bus: scsi 2026-01-09 01:20:29.354408 | orchestrator | 2026-01-09 01:20:25 | INFO  | Setting property hw_rng_model: virtio 2026-01-09 01:20:29.354422 | orchestrator | 2026-01-09 01:20:25 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-09 01:20:29.354432 | orchestrator | 2026-01-09 01:20:25 | INFO  | Setting property hw_watchdog_action: reset 2026-01-09 01:20:29.354442 | orchestrator | 2026-01-09 01:20:26 | INFO  | Setting property hypervisor_type: qemu 2026-01-09 01:20:29.354450 | orchestrator | 2026-01-09 01:20:26 | INFO  | Setting property os_distro: ubuntu 2026-01-09 01:20:29.354460 | orchestrator | 2026-01-09 01:20:26 | INFO  | Setting property replace_frequency: quarterly 2026-01-09 01:20:29.354486 | orchestrator | 2026-01-09 01:20:26 | INFO  | Setting property uuid_validity: last-1 2026-01-09 01:20:29.354496 | orchestrator | 2026-01-09 01:20:26 | INFO  | Setting property provided_until: none 2026-01-09 01:20:29.354505 | orchestrator | 
2026-01-09 01:20:27 | INFO  | Setting property os_purpose: network 2026-01-09 01:20:29.354514 | orchestrator | 2026-01-09 01:20:27 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-09 01:20:29.354524 | orchestrator | 2026-01-09 01:20:27 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-09 01:20:29.354535 | orchestrator | 2026-01-09 01:20:27 | INFO  | Setting property internal_version: 2026-01-08 2026-01-09 01:20:29.354545 | orchestrator | 2026-01-09 01:20:28 | INFO  | Setting property image_original_user: ubuntu 2026-01-09 01:20:29.354555 | orchestrator | 2026-01-09 01:20:28 | INFO  | Setting property os_version: 2026-01-08 2026-01-09 01:20:29.354565 | orchestrator | 2026-01-09 01:20:28 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260108.qcow2 2026-01-09 01:20:29.354597 | orchestrator | 2026-01-09 01:20:28 | INFO  | Setting property image_build_date: 2026-01-08 2026-01-09 01:20:29.354609 | orchestrator | 2026-01-09 01:20:28 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-08' 2026-01-09 01:20:29.354620 | orchestrator | 2026-01-09 01:20:28 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-08' 2026-01-09 01:20:29.354628 | orchestrator | 2026-01-09 01:20:29 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-09 01:20:29.354635 | orchestrator | 2026-01-09 01:20:29 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-09 01:20:29.354644 | orchestrator | 2026-01-09 01:20:29 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-09 01:20:29.354660 | orchestrator | 2026-01-09 01:20:29 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-09 01:20:29.856263 | orchestrator | ok: Runtime: 0:03:53.176665 2026-01-09 01:20:29.879548 | 2026-01-09 01:20:29.879753 | TASK [Run checks] 2026-01-09 
01:20:30.648830 | orchestrator | + set -e 2026-01-09 01:20:30.649005 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-09 01:20:30.649020 | orchestrator | ++ export INTERACTIVE=false 2026-01-09 01:20:30.649030 | orchestrator | ++ INTERACTIVE=false 2026-01-09 01:20:30.649036 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-09 01:20:30.649041 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-09 01:20:30.649047 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-09 01:20:30.649592 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-09 01:20:30.655288 | orchestrator | 2026-01-09 01:20:30.655350 | orchestrator | # CHECK 2026-01-09 01:20:30.655356 | orchestrator | 2026-01-09 01:20:30.655361 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 01:20:30.655369 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 01:20:30.655376 | orchestrator | + echo 2026-01-09 01:20:30.655382 | orchestrator | + echo '# CHECK' 2026-01-09 01:20:30.655388 | orchestrator | + echo 2026-01-09 01:20:30.655398 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-09 01:20:30.656783 | orchestrator | ++ semver latest 5.0.0 2026-01-09 01:20:30.725794 | orchestrator | 2026-01-09 01:20:30.725883 | orchestrator | ## Containers @ testbed-manager 2026-01-09 01:20:30.725892 | orchestrator | 2026-01-09 01:20:30.725900 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-09 01:20:30.725905 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 01:20:30.725910 | orchestrator | + echo 2026-01-09 01:20:30.725915 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-09 01:20:30.725920 | orchestrator | + echo 2026-01-09 01:20:30.725925 | orchestrator | + osism container testbed-manager ps 2026-01-09 01:20:32.953799 | orchestrator | 2026-01-09 01:20:32 | INFO  | Creating empty known_hosts file: /share/known_hosts 
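[Editorial note] The check script above shells out to a `semver` helper (`semver latest 5.0.0` yields `-1` in the `[[ -1 -eq -1 ]]` test) to decide whether version-gated checks apply for `MANAGER_VERSION=latest`. A minimal comparator in the same spirit; treating a non-numeric tag like `latest` as comparing lowest is an assumption inferred from the `-1` result, not documented behavior:

```python
from itertools import zip_longest

def compare_versions(a: str, b: str) -> int:
    """Return -1, 0 or 1, mimicking the `semver` helper in the check script.

    Non-numeric versions such as 'latest' compare as lowest, matching the
    `semver latest 5.0.0` -> -1 result seen in the log (an inferred rule).
    """
    def key(v):
        try:
            return [int(p) for p in v.split(".")]
        except ValueError:
            return []  # non-numeric tag: sorts before any numeric version

    for x, y in zip_longest(key(a), key(b), fillvalue=0):
        if x != y:
            return -1 if x < y else 1
    return 0

print(compare_versions("latest", "5.0.0"))  # -1
print(compare_versions("5.0.1", "5.0.0"))   # 1
```

The script then pairs this with a string check (`latest != latest` is false), so `latest` always runs the current-version code path.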
2026-01-09 01:20:33.345072 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-09 01:20:33.345184 | orchestrator | 5724048111b2 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_blackbox_exporter 2026-01-09 01:20:33.345260 | orchestrator | e0b85f813aae registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-01-09 01:20:33.345270 | orchestrator | daeaf9fdff14 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-01-09 01:20:33.345279 | orchestrator | 21bcdf146dd0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-01-09 01:20:33.345296 | orchestrator | e1c84516ee22 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-01-09 01:20:33.345303 | orchestrator | a3673ec5b53f registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient 2026-01-09 01:20:33.345308 | orchestrator | ea21725a31d5 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-01-09 01:20:33.345313 | orchestrator | 67a84b506188 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-01-09 01:20:33.346874 | orchestrator | fb7ecf32cdd6 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-01-09 01:20:33.346922 | orchestrator | a68116e599f2 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 33 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin 2026-01-09 01:20:33.346929 | orchestrator | c519e7a77dde registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 34 minutes ago Up 33 
minutes openstackclient 2026-01-09 01:20:33.346936 | orchestrator | 5e03b3544642 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 34 minutes ago Up 33 minutes (healthy) 8080/tcp homer 2026-01-09 01:20:33.346942 | orchestrator | b92e0642972c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 58 minutes ago Up 57 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-09 01:20:33.346950 | orchestrator | 8f49f2986cbe registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1 2026-01-09 01:20:33.346957 | orchestrator | 5812a01c20c6 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-ansible 2026-01-09 01:20:33.348386 | orchestrator | a55f9b249e42 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) kolla-ansible 2026-01-09 01:20:33.348420 | orchestrator | c4cf51a3d454 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-kubernetes 2026-01-09 01:20:33.348424 | orchestrator | ca1527a8c97d registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) ceph-ansible 2026-01-09 01:20:33.348429 | orchestrator | 578b6410cd85 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1 2026-01-09 01:20:33.348433 | orchestrator | ba1ba9bfab45 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient 2026-01-09 01:20:33.348437 | orchestrator | a2789ca89ce5 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-09 01:20:33.348440 | orchestrator | 27f6779f3fdf 
registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1 2026-01-09 01:20:33.348444 | orchestrator | 6833dafb99ca registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-09 01:20:33.348471 | orchestrator | 7a9ddb65cc38 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1 2026-01-09 01:20:33.348475 | orchestrator | 400cf0458f65 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1 2026-01-09 01:20:33.348479 | orchestrator | e76a59ba0200 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1 2026-01-09 01:20:33.348483 | orchestrator | 2a2c50c90c4f registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1 2026-01-09 01:20:33.348487 | orchestrator | d9f79c3d9f94 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1 2026-01-09 01:20:33.348491 | orchestrator | 27f075c42b3a registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-09 01:20:33.684733 | orchestrator | 2026-01-09 01:20:33.684857 | orchestrator | ## Images @ testbed-manager 2026-01-09 01:20:33.684869 | orchestrator | 2026-01-09 01:20:33.684876 | orchestrator | + echo 2026-01-09 01:20:33.684883 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-09 01:20:33.684891 | orchestrator | + echo 2026-01-09 01:20:33.684901 | orchestrator | + osism container testbed-manager images 2026-01-09 01:20:36.180040 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-09 01:20:36.181526 | orchestrator | registry.osism.tech/osism/osism-ansible latest f8032e80517a About an hour ago 611MB 2026-01-09 01:20:36.181561 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 59767617667e About an hour ago 610MB 2026-01-09 01:20:36.181569 | orchestrator | registry.osism.tech/osism/ceph-ansible reef a887e9224152 About an hour ago 560MB 2026-01-09 01:20:36.181576 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 594ddb5c9047 About an hour ago 1.23GB 2026-01-09 01:20:36.181583 | orchestrator | registry.osism.tech/osism/osism latest 0c15e81e8958 About an hour ago 384MB 2026-01-09 01:20:36.181589 | orchestrator | registry.osism.tech/osism/osism-frontend latest e216f1a5963b About an hour ago 239MB 2026-01-09 01:20:36.181596 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 6d084634306c About an hour ago 335MB 2026-01-09 01:20:36.181602 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7822d2d3ec1f 22 hours ago 238MB 2026-01-09 01:20:36.181609 | orchestrator | registry.osism.tech/osism/cephclient reef 95a09b837137 22 hours ago 453MB 2026-01-09 01:20:36.181617 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a0a2854fc448 23 hours ago 675MB 2026-01-09 01:20:36.181623 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8926fd73257f 23 hours ago 271MB 2026-01-09 01:20:36.181630 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2fa0daf17a55 23 hours ago 584MB 2026-01-09 01:20:36.181636 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 a64be3f78a6c 23 hours ago 409MB 2026-01-09 01:20:36.181678 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9e9b2a749855 23 hours ago 311MB 2026-01-09 01:20:36.181689 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 e64bfab67754 23 hours ago 313MB 2026-01-09 01:20:36.181697 | orchestrator | 
registry.osism.tech/kolla/prometheus-cadvisor 2024.2 da8326fbdfa0 23 hours ago 362MB 2026-01-09 01:20:36.181704 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 2af17dd5e854 23 hours ago 844MB 2026-01-09 01:20:36.181710 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 5 weeks ago 11.5MB 2026-01-09 01:20:36.181716 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 8 weeks ago 334MB 2026-01-09 01:20:36.181723 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-09 01:20:36.181729 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-01-09 01:20:36.181736 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-09 01:20:36.181742 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-09 01:20:36.181748 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-09 01:20:36.541981 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-09 01:20:36.543699 | orchestrator | ++ semver latest 5.0.0 2026-01-09 01:20:36.599776 | orchestrator | 2026-01-09 01:20:36.599865 | orchestrator | ## Containers @ testbed-node-0 2026-01-09 01:20:36.599873 | orchestrator | 2026-01-09 01:20:36.599879 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-09 01:20:36.599884 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 01:20:36.599890 | orchestrator | + echo 2026-01-09 01:20:36.599895 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-09 01:20:36.599901 | orchestrator | + echo 2026-01-09 01:20:36.599906 | orchestrator | + osism container testbed-node-0 ps 2026-01-09 01:20:39.105900 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-09 01:20:39.106050 | orchestrator | f070b4a70878 
registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-01-09 01:20:39.106069 | orchestrator | f349c7aaa3f2 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-01-09 01:20:39.106074 | orchestrator | 0d88a7d5229b registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-09 01:20:39.106101 | orchestrator | ec1e16d365ca registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-09 01:20:39.106110 | orchestrator | aed5224df7b0 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-09 01:20:39.106118 | orchestrator | b0b1abbcfbcf registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-01-09 01:20:39.106124 | orchestrator | 75c8a6a3af2c registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-09 01:20:39.106131 | orchestrator | cc671722c38c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-01-09 01:20:39.106170 | orchestrator | 25e8e646a593 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-01-09 01:20:39.106193 | orchestrator | 465d9b418524 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2026-01-09 01:20:39.106243 | orchestrator | b1da9d16ce3c registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-09 01:20:39.106251 | orchestrator | 1bf9d3a3b85d 
registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-01-09 01:20:39.106258 | orchestrator | 00cced658d00 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-01-09 01:20:39.106264 | orchestrator | b91f954b68f4 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-01-09 01:20:39.106271 | orchestrator | 4375a408a1da registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-01-09 01:20:39.106278 | orchestrator | 840aa438e914 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-01-09 01:20:39.106285 | orchestrator | 6311c8aa364e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-01-09 01:20:39.106291 | orchestrator | 38efe172a530 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-01-09 01:20:39.106298 | orchestrator | aa52e4e3a4b8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-01-09 01:20:39.106304 | orchestrator | 945ea01bb49e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-01-09 01:20:39.106309 | orchestrator | 5f037444c308 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-01-09 01:20:39.106328 | orchestrator | b50c7b6f3b62 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 
2026-01-09 01:20:39.106338 | orchestrator | 0c6ae5a44e60 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-01-09 01:20:39.106342 | orchestrator | f817b9c9ac45 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-01-09 01:20:39.106345 | orchestrator | 2d342e557ae7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-01-09 01:20:39.107749 | orchestrator | 4398cd68cc65 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-01-09 01:20:39.107817 | orchestrator | 0b83a49cf67b registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-01-09 01:20:39.107825 | orchestrator | eec3d2534b15 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-01-09 01:20:39.107858 | orchestrator | 0a48f26c9794 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-01-09 01:20:39.107866 | orchestrator | ca707df6a10e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-01-09 01:20:39.107873 | orchestrator | 5fc2708c3315 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-01-09 01:20:39.107879 | orchestrator | 74735c63fa20 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-01-09 01:20:39.107885 | orchestrator | dff841dad124 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr 
-…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2026-01-09 01:20:39.107893 | orchestrator | 25a527f9dd7a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-01-09 01:20:39.107900 | orchestrator | 7ef8751a5b12 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-01-09 01:20:39.107905 | orchestrator | 672c52f27fa4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-01-09 01:20:39.107912 | orchestrator | 534212cc02bf registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-01-09 01:20:39.107918 | orchestrator | 1b46d47b8077 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2026-01-09 01:20:39.107924 | orchestrator | 176acef63009 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-01-09 01:20:39.107930 | orchestrator | 212200abf491 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch_dashboards 2026-01-09 01:20:39.107936 | orchestrator | bc693c91de2f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-01-09 01:20:39.107942 | orchestrator | 2f6c0c444697 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-01-09 01:20:39.107949 | orchestrator | fa8469ffe807 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-01-09 01:20:39.107955 | orchestrator | b5860ecfcf8c registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 
2026-01-09 01:20:39.107976 | orchestrator | 10acc6163dca registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-01-09 01:20:39.107982 | orchestrator | 7b4e96139423 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2026-01-09 01:20:39.107997 | orchestrator | 27e7f256eb14 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2026-01-09 01:20:39.108010 | orchestrator | 17bd4542f9f7 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2026-01-09 01:20:39.108017 | orchestrator | 2073387bea78 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2026-01-09 01:20:39.108032 | orchestrator | 8f13625a2e72 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-01-09 01:20:39.108038 | orchestrator | 540ded1fe7bf registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-01-09 01:20:39.108045 | orchestrator | 074369c9407e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-01-09 01:20:39.108050 | orchestrator | ddd009f74485 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2026-01-09 01:20:39.108056 | orchestrator | f0f94a007b97 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-01-09 01:20:39.108062 | orchestrator | 64de23e5d823 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2026-01-09 01:20:39.108067 | orchestrator | 5eb88a08f82f 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-01-09 01:20:39.108073 | orchestrator | d4b078e34c0f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-01-09 01:20:39.108079 | orchestrator | ce23f99a848e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-01-09 01:20:39.108084 | orchestrator | ef3f1f66544f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-01-09 01:20:39.449569 | orchestrator | 2026-01-09 01:20:39.449677 | orchestrator | ## Images @ testbed-node-0 2026-01-09 01:20:39.449691 | orchestrator | 2026-01-09 01:20:39.449700 | orchestrator | + echo 2026-01-09 01:20:39.449709 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-09 01:20:39.449718 | orchestrator | + echo 2026-01-09 01:20:39.449727 | orchestrator | + osism container testbed-node-0 images 2026-01-09 01:20:41.988513 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-09 01:20:41.988647 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 024dffda1c70 22 hours ago 1.27GB 2026-01-09 01:20:41.988688 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b26609acb4a5 23 hours ago 1.56GB 2026-01-09 01:20:41.988698 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 d47de75b0f3e 23 hours ago 1.53GB 2026-01-09 01:20:41.988707 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a0a2854fc448 23 hours ago 675MB 2026-01-09 01:20:41.988715 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 831a162f22d5 23 hours ago 279MB 2026-01-09 01:20:41.988724 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 1259e039edf3 23 hours ago 1.02GB 2026-01-09 01:20:41.988733 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f527a3280384 23 hours ago 328MB 2026-01-09 01:20:41.988741 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 adcbac3215aa 23 hours ago 417MB 2026-01-09 01:20:41.988749 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8926fd73257f 23 hours ago 271MB 2026-01-09 01:20:41.988780 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9f0676474158 23 hours ago 271MB 2026-01-09 01:20:41.988788 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 c5d59be14713 23 hours ago 282MB 2026-01-09 01:20:41.988796 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2fa0daf17a55 23 hours ago 584MB 2026-01-09 01:20:41.988804 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9500faa9f5e4 23 hours ago 1.15GB 2026-01-09 01:20:41.988812 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7bfd72b2e931 23 hours ago 304MB 2026-01-09 01:20:41.988819 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 cb81d3c1a304 23 hours ago 297MB 2026-01-09 01:20:41.988827 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9e9b2a749855 23 hours ago 311MB 2026-01-09 01:20:41.988835 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f4f88ddcf361 23 hours ago 306MB 2026-01-09 01:20:41.988843 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 da8326fbdfa0 23 hours ago 362MB 2026-01-09 01:20:41.988850 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9aa8aff9caf2 23 hours ago 457MB 2026-01-09 01:20:41.988858 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 4719af68f793 23 hours ago 284MB 2026-01-09 01:20:41.988866 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e501b53579f2 23 hours ago 284MB 2026-01-09 01:20:41.988874 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d7b0f4dd4330 23 hours ago 278MB 2026-01-09 01:20:41.988882 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ed6f07ffbe42 23 hours ago 278MB 2026-01-09 01:20:41.988889 | 
orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 5a98bfb47b80 23 hours ago 996MB 2026-01-09 01:20:41.988897 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 dfafc9f7afb1 23 hours ago 995MB 2026-01-09 01:20:41.988905 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2b3e3a0e91f0 23 hours ago 996MB 2026-01-09 01:20:41.988913 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0ad56ed5fd77 23 hours ago 1.17GB 2026-01-09 01:20:41.988920 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 06ef6d50d22e 23 hours ago 1.03GB 2026-01-09 01:20:41.988929 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 71d9786b448a 23 hours ago 1.06GB 2026-01-09 01:20:41.988936 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 350ee0f05475 23 hours ago 1.03GB 2026-01-09 01:20:41.988944 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8c434086e541 23 hours ago 1.03GB 2026-01-09 01:20:41.988952 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 84ca48dc448a 23 hours ago 1.06GB 2026-01-09 01:20:41.988959 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7af4915d7d55 23 hours ago 1.1GB 2026-01-09 01:20:41.988967 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 fc2a752737ed 23 hours ago 1.41GB 2026-01-09 01:20:41.988975 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0e868106e8a0 23 hours ago 1.42GB 2026-01-09 01:20:41.988983 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f554233359de 23 hours ago 1.41GB 2026-01-09 01:20:41.989018 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 b28c5dc0e2d3 23 hours ago 1.72GB 2026-01-09 01:20:41.989026 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 d16fdbb9f4bf 23 hours ago 1.05GB 2026-01-09 01:20:41.989035 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 0bc9609f29b0 23 hours ago 995MB 
2026-01-09 01:20:41.989049 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0d2b531879e6 23 hours ago 981MB 2026-01-09 01:20:41.989057 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7f070537180f 23 hours ago 1.22GB 2026-01-09 01:20:41.989065 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7f0621155ca4 23 hours ago 1.22GB 2026-01-09 01:20:41.989072 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5507ac54c2b1 23 hours ago 1.22GB 2026-01-09 01:20:41.989080 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c75c743eb2df 23 hours ago 1.37GB 2026-01-09 01:20:41.989088 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0067c8881dd8 23 hours ago 1.25GB 2026-01-09 01:20:41.989096 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 7027f8816a0c 23 hours ago 1.13GB 2026-01-09 01:20:41.989104 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 ad394dfe7a70 23 hours ago 994MB 2026-01-09 01:20:41.989112 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 c7e192042936 23 hours ago 990MB 2026-01-09 01:20:41.989119 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 15288884cdd7 23 hours ago 989MB 2026-01-09 01:20:41.989127 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2427389bd146 23 hours ago 990MB 2026-01-09 01:20:41.989135 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 8bddd581879b 23 hours ago 994MB 2026-01-09 01:20:41.989143 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 71af3b46df48 23 hours ago 990MB 2026-01-09 01:20:41.989151 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 0d050572749e 23 hours ago 979MB 2026-01-09 01:20:41.989158 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 95d36798e8da 23 hours ago 979MB 2026-01-09 01:20:41.989166 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 b1d21966dc30 23 hours ago 979MB 
2026-01-09 01:20:41.989174 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 3b270fb3f8c4 23 hours ago 978MB 2026-01-09 01:20:41.989182 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d25262980980 23 hours ago 1.09GB 2026-01-09 01:20:41.989190 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 6348c0f155de 23 hours ago 1.04GB 2026-01-09 01:20:41.989227 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 f69307412ca4 23 hours ago 1.05GB 2026-01-09 01:20:41.989236 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 180f00e90820 23 hours ago 982MB 2026-01-09 01:20:41.989244 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 23636d34f61e 23 hours ago 981MB 2026-01-09 01:20:41.989252 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 0282c2332475 23 hours ago 845MB 2026-01-09 01:20:41.989260 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 6bb1789b7973 23 hours ago 845MB 2026-01-09 01:20:41.989268 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1ed9ed01a9fc 23 hours ago 845MB 2026-01-09 01:20:41.989276 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 5e870de9c1cd 23 hours ago 845MB 2026-01-09 01:20:42.331864 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-09 01:20:42.333036 | orchestrator | ++ semver latest 5.0.0 2026-01-09 01:20:42.387113 | orchestrator | 2026-01-09 01:20:42.387253 | orchestrator | ## Containers @ testbed-node-1 2026-01-09 01:20:42.387266 | orchestrator | 2026-01-09 01:20:42.387273 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-09 01:20:42.387279 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 01:20:42.387286 | orchestrator | + echo 2026-01-09 01:20:42.387313 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-09 01:20:42.387318 | orchestrator | + echo 2026-01-09 01:20:42.387322 | orchestrator | + osism container testbed-node-1 ps 
2026-01-09 01:20:44.898835 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-09 01:20:44.898936 | orchestrator | 96b70c6f7225 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-01-09 01:20:44.898945 | orchestrator | ea628e640dfe registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-01-09 01:20:44.898952 | orchestrator | bbf693519b55 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-09 01:20:44.898958 | orchestrator | 9cf38b1c4e02 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-09 01:20:44.898964 | orchestrator | 2d9408a35356 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-09 01:20:44.898970 | orchestrator | 5d78d8feb8dc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-01-09 01:20:44.898975 | orchestrator | 0da24e4ee61f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-09 01:20:44.898981 | orchestrator | 47a808bb9c94 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-01-09 01:20:44.898990 | orchestrator | 262cf272e701 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-01-09 01:20:44.898996 | orchestrator | cd8b2c647335 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-01-09 01:20:44.899002 | orchestrator | b62912e5d1f9 registry.osism.tech/kolla/cinder-backup:2024.2 
"dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-09 01:20:44.899007 | orchestrator | ecfc7b270a70 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-09 01:20:44.899013 | orchestrator | cdc639afac2a registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-01-09 01:20:44.899019 | orchestrator | f73758a72ae7 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-01-09 01:20:44.899039 | orchestrator | 0d47d66b2aeb registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-01-09 01:20:44.899045 | orchestrator | 810cc3ef09b9 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-01-09 01:20:44.899051 | orchestrator | 2e1449653500 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-01-09 01:20:44.899057 | orchestrator | e7463f3bb900 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-01-09 01:20:44.899080 | orchestrator | 50cbdefe8176 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-01-09 01:20:44.899086 | orchestrator | a997419d35c0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-01-09 01:20:44.899092 | orchestrator | fa236f4cf413 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-01-09 01:20:44.899108 | orchestrator 
| bad37facd510 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-01-09 01:20:44.899114 | orchestrator | b6ba37374bee registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-01-09 01:20:44.899120 | orchestrator | 5e260c622455 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-01-09 01:20:44.899125 | orchestrator | d35a65f320d6 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-01-09 01:20:44.899131 | orchestrator | 0e1b6d0b4437 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-01-09 01:20:44.899136 | orchestrator | b4c9123a71a0 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-01-09 01:20:44.899142 | orchestrator | 981d0b583d75 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-01-09 01:20:44.899147 | orchestrator | 2f9f19f0106e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-01-09 01:20:44.899153 | orchestrator | 81677f29210e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-01-09 01:20:44.899158 | orchestrator | b77919c215d2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-01-09 01:20:44.899164 | orchestrator | b12e4119dfea registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-01-09 
01:20:44.899169 | orchestrator | 985496c34033 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-01-09 01:20:44.899175 | orchestrator | 20b3cf7b69b0 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-01-09 01:20:44.899180 | orchestrator | ecac18b71194 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-01-09 01:20:44.899186 | orchestrator | 73f2fe53e18a registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-01-09 01:20:44.899191 | orchestrator | 84a6494decc6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-01-09 01:20:44.899220 | orchestrator | ca2f6ff45fca registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-01-09 01:20:44.899231 | orchestrator | 3bcabd5602ba registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-09 01:20:44.899237 | orchestrator | 8b5b6fc860dd registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-01-09 01:20:44.899242 | orchestrator | ddcf4206edb0 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-01-09 01:20:44.899248 | orchestrator | 81c7f84e769d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-01-09 01:20:44.899253 | orchestrator | 33a038609db0 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-01-09 01:20:44.899259 | 
orchestrator | b7185be84114 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-01-09 01:20:44.899268 | orchestrator | d21db66b0c0f registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-01-09 01:20:44.899273 | orchestrator | e168862e5a34 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2026-01-09 01:20:44.899279 | orchestrator | 8bb77220c4d5 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db 2026-01-09 01:20:44.899284 | orchestrator | 04a4a842f1a1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db 2026-01-09 01:20:44.899290 | orchestrator | 59a4f2509739 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2026-01-09 01:20:44.899295 | orchestrator | 1ccb8a94b1c5 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-01-09 01:20:44.899301 | orchestrator | 006241154e56 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2026-01-09 01:20:44.899306 | orchestrator | b1b412d8efd8 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-01-09 01:20:44.899311 | orchestrator | d91f5ab127e0 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-01-09 01:20:44.899317 | orchestrator | a500b59bc817 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2026-01-09 01:20:44.899323 | orchestrator | dace49393bc1 registry.osism.tech/kolla/redis:2024.2 
"dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-01-09 01:20:44.899328 | orchestrator | 722cdaa29892 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-01-09 01:20:44.899334 | orchestrator | 4d608fd77829 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-01-09 01:20:44.899339 | orchestrator | b3d8ae866619 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-01-09 01:20:44.899350 | orchestrator | caf798fedd73 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-01-09 01:20:45.242083 | orchestrator | 2026-01-09 01:20:45.242180 | orchestrator | ## Images @ testbed-node-1 2026-01-09 01:20:45.242191 | orchestrator | 2026-01-09 01:20:45.242248 | orchestrator | + echo 2026-01-09 01:20:45.242258 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-09 01:20:45.242266 | orchestrator | + echo 2026-01-09 01:20:45.242273 | orchestrator | + osism container testbed-node-1 images 2026-01-09 01:20:47.754584 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-09 01:20:47.754676 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 024dffda1c70 22 hours ago 1.27GB 2026-01-09 01:20:47.754682 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b26609acb4a5 23 hours ago 1.56GB 2026-01-09 01:20:47.754687 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 d47de75b0f3e 23 hours ago 1.53GB 2026-01-09 01:20:47.754691 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a0a2854fc448 23 hours ago 675MB 2026-01-09 01:20:47.754695 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 831a162f22d5 23 hours ago 279MB 2026-01-09 01:20:47.754699 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 1259e039edf3 23 hours ago 1.02GB 2026-01-09 01:20:47.754718 | orchestrator 
| registry.osism.tech/kolla/rabbitmq 2024.2 f527a3280384 23 hours ago 328MB 2026-01-09 01:20:47.754722 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 adcbac3215aa 23 hours ago 417MB 2026-01-09 01:20:47.754725 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8926fd73257f 23 hours ago 271MB 2026-01-09 01:20:47.754729 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9f0676474158 23 hours ago 271MB 2026-01-09 01:20:47.754736 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2fa0daf17a55 23 hours ago 584MB 2026-01-09 01:20:47.754740 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 c5d59be14713 23 hours ago 282MB 2026-01-09 01:20:47.754744 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9500faa9f5e4 23 hours ago 1.15GB 2026-01-09 01:20:47.754748 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7bfd72b2e931 23 hours ago 304MB 2026-01-09 01:20:47.754752 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 cb81d3c1a304 23 hours ago 297MB 2026-01-09 01:20:47.754755 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f4f88ddcf361 23 hours ago 306MB 2026-01-09 01:20:47.754759 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9e9b2a749855 23 hours ago 311MB 2026-01-09 01:20:47.754763 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 da8326fbdfa0 23 hours ago 362MB 2026-01-09 01:20:47.754767 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9aa8aff9caf2 23 hours ago 457MB 2026-01-09 01:20:47.754770 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e501b53579f2 23 hours ago 284MB 2026-01-09 01:20:47.754774 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 4719af68f793 23 hours ago 284MB 2026-01-09 01:20:47.754778 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d7b0f4dd4330 23 hours ago 278MB 2026-01-09 01:20:47.754781 
| orchestrator | registry.osism.tech/kolla/redis 2024.2 ed6f07ffbe42 23 hours ago 278MB 2026-01-09 01:20:47.754785 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 5a98bfb47b80 23 hours ago 996MB 2026-01-09 01:20:47.754805 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 dfafc9f7afb1 23 hours ago 995MB 2026-01-09 01:20:47.754809 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2b3e3a0e91f0 23 hours ago 996MB 2026-01-09 01:20:47.754812 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0ad56ed5fd77 23 hours ago 1.17GB 2026-01-09 01:20:47.754816 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 06ef6d50d22e 23 hours ago 1.03GB 2026-01-09 01:20:47.754820 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 71d9786b448a 23 hours ago 1.06GB 2026-01-09 01:20:47.754824 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 350ee0f05475 23 hours ago 1.03GB 2026-01-09 01:20:47.754828 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8c434086e541 23 hours ago 1.03GB 2026-01-09 01:20:47.754832 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 84ca48dc448a 23 hours ago 1.06GB 2026-01-09 01:20:47.754836 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7af4915d7d55 23 hours ago 1.1GB 2026-01-09 01:20:47.754839 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 fc2a752737ed 23 hours ago 1.41GB 2026-01-09 01:20:47.754843 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0e868106e8a0 23 hours ago 1.42GB 2026-01-09 01:20:47.754847 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f554233359de 23 hours ago 1.41GB 2026-01-09 01:20:47.754861 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 b28c5dc0e2d3 23 hours ago 1.72GB 2026-01-09 01:20:47.754866 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0d2b531879e6 23 hours ago 981MB 2026-01-09 
01:20:47.754869 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7f070537180f 23 hours ago 1.22GB 2026-01-09 01:20:47.754873 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7f0621155ca4 23 hours ago 1.22GB 2026-01-09 01:20:47.754877 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5507ac54c2b1 23 hours ago 1.22GB 2026-01-09 01:20:47.754880 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c75c743eb2df 23 hours ago 1.37GB 2026-01-09 01:20:47.754884 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0067c8881dd8 23 hours ago 1.25GB 2026-01-09 01:20:47.754888 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 7027f8816a0c 23 hours ago 1.13GB 2026-01-09 01:20:47.754891 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 ad394dfe7a70 23 hours ago 994MB 2026-01-09 01:20:47.754895 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 c7e192042936 23 hours ago 990MB 2026-01-09 01:20:47.754899 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 15288884cdd7 23 hours ago 989MB 2026-01-09 01:20:47.754902 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2427389bd146 23 hours ago 990MB 2026-01-09 01:20:47.754906 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 8bddd581879b 23 hours ago 994MB 2026-01-09 01:20:47.754910 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 71af3b46df48 23 hours ago 990MB 2026-01-09 01:20:47.754913 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d25262980980 23 hours ago 1.09GB 2026-01-09 01:20:47.754917 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 6348c0f155de 23 hours ago 1.04GB 2026-01-09 01:20:47.754921 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 f69307412ca4 23 hours ago 1.05GB 2026-01-09 01:20:47.754924 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 0282c2332475 23 hours ago 845MB 2026-01-09 
01:20:47.754932 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 6bb1789b7973 23 hours ago 845MB 2026-01-09 01:20:47.754939 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1ed9ed01a9fc 23 hours ago 845MB 2026-01-09 01:20:47.754943 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 5e870de9c1cd 23 hours ago 845MB 2026-01-09 01:20:48.091043 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-09 01:20:48.091223 | orchestrator | ++ semver latest 5.0.0 2026-01-09 01:20:48.132852 | orchestrator | 2026-01-09 01:20:48.132946 | orchestrator | ## Containers @ testbed-node-2 2026-01-09 01:20:48.132953 | orchestrator | 2026-01-09 01:20:48.132958 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-09 01:20:48.132962 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 01:20:48.132966 | orchestrator | + echo 2026-01-09 01:20:48.132971 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-01-09 01:20:48.132977 | orchestrator | + echo 2026-01-09 01:20:48.132981 | orchestrator | + osism container testbed-node-2 ps 2026-01-09 01:20:50.570071 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-09 01:20:50.570175 | orchestrator | 55518a548fae registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-01-09 01:20:50.570269 | orchestrator | 19e4b1a2a4d5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-01-09 01:20:50.570279 | orchestrator | e7b497432661 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-09 01:20:50.570284 | orchestrator | 4f3bc18c0434 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-09 01:20:50.570290 | 
orchestrator | 50618fc33705 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-09 01:20:50.570297 | orchestrator | cf59fe4c3934 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-01-09 01:20:50.570302 | orchestrator | 74585af19636 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-09 01:20:50.570309 | orchestrator | b328fd896dab registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-01-09 01:20:50.570316 | orchestrator | 69cb8006f615 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-01-09 01:20:50.570322 | orchestrator | 95ccf25c152f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-01-09 01:20:50.570328 | orchestrator | 050c734cba56 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-09 01:20:50.570335 | orchestrator | d490efedd0b4 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-01-09 01:20:50.570342 | orchestrator | a7321ad0402b registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-01-09 01:20:50.570350 | orchestrator | 2e64fd2f2e9f registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2026-01-09 01:20:50.570377 | orchestrator | 30ea502ecd92 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-01-09 01:20:50.570382 | orchestrator | e9a4a467ae8d 
registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-01-09 01:20:50.570387 | orchestrator | 07462e2c8edf registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-01-09 01:20:50.570392 | orchestrator | 26da0ff3994d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-01-09 01:20:50.570397 | orchestrator | 523709d23792 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-01-09 01:20:50.570401 | orchestrator | 25f4cf956705 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-01-09 01:20:50.570405 | orchestrator | 60295961f5cd registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2026-01-09 01:20:50.570425 | orchestrator | 72461c73ef84 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-01-09 01:20:50.570430 | orchestrator | 4ba903991c94 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-01-09 01:20:50.570433 | orchestrator | 3ae461082c3d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-01-09 01:20:50.570437 | orchestrator | f665b61ff8eb registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-01-09 01:20:50.570441 | orchestrator | 4edb44d9d0ed registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes 
(healthy) placement_api 2026-01-09 01:20:50.570445 | orchestrator | 9e7d31025716 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-01-09 01:20:50.570448 | orchestrator | dc351a473fda registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-01-09 01:20:50.570452 | orchestrator | 05955f5ced41 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-01-09 01:20:50.570469 | orchestrator | 32435e5cb52d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-01-09 01:20:50.570473 | orchestrator | d8edf98b5af5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-01-09 01:20:50.570477 | orchestrator | 34970c8d3d7a registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-01-09 01:20:50.570480 | orchestrator | faf0d6ee953c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-01-09 01:20:50.570490 | orchestrator | 2674d564aa2b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-01-09 01:20:50.570494 | orchestrator | 884291a43cc1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-01-09 01:20:50.570498 | orchestrator | 214a99556b65 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-01-09 01:20:50.570502 | orchestrator | 2b4f2187aaad registry.osism.tech/kolla/horizon:2024.2 "dumb-init 
--single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-01-09 01:20:50.570509 | orchestrator | f66d793643b4 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-01-09 01:20:50.570513 | orchestrator | 972d33c7a75e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-09 01:20:50.570516 | orchestrator | 82c7f8cef0ed registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-01-09 01:20:50.570520 | orchestrator | 8b220953d550 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-01-09 01:20:50.570524 | orchestrator | 724fcaf07019 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2026-01-09 01:20:50.570530 | orchestrator | 8c138fb568af registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-01-09 01:20:50.570536 | orchestrator | fa465e6f9219 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-01-09 01:20:50.570546 | orchestrator | 4895b1e1e6e5 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2026-01-09 01:20:50.570553 | orchestrator | 8f22eaabefdf registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2026-01-09 01:20:50.570560 | orchestrator | 9a1c979164c3 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db 2026-01-09 01:20:50.570566 | orchestrator | 1d8be4e544a8 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db 2026-01-09 01:20:50.570572 | 
orchestrator | 7c764fc480a2 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-01-09 01:20:50.570578 | orchestrator | c07f2d8f0787 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2026-01-09 01:20:50.570582 | orchestrator | 39d165d2e629 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-01-09 01:20:50.570585 | orchestrator | 72b9b771d5e7 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-01-09 01:20:50.570589 | orchestrator | 35a392e50c17 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-01-09 01:20:50.570597 | orchestrator | d85fc5b36b7f registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2026-01-09 01:20:50.570601 | orchestrator | 952b4b737c84 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-01-09 01:20:50.570605 | orchestrator | 2db69c90f3f2 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-01-09 01:20:50.570608 | orchestrator | ef54de8ae86d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-01-09 01:20:50.570612 | orchestrator | 576903777deb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-01-09 01:20:50.570616 | orchestrator | 28b619dc6eb9 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-01-09 01:20:50.903072 | orchestrator | 2026-01-09 01:20:50.903162 | orchestrator | ## Images @ testbed-node-2 2026-01-09 
01:20:50.903172 | orchestrator | 2026-01-09 01:20:50.903181 | orchestrator | + echo 2026-01-09 01:20:50.903188 | orchestrator | + echo '## Images @ testbed-node-2' 2026-01-09 01:20:50.903239 | orchestrator | + echo 2026-01-09 01:20:50.903246 | orchestrator | + osism container testbed-node-2 images 2026-01-09 01:20:53.376739 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-09 01:20:53.376826 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 024dffda1c70 22 hours ago 1.27GB 2026-01-09 01:20:53.376834 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b26609acb4a5 23 hours ago 1.56GB 2026-01-09 01:20:53.376840 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 d47de75b0f3e 23 hours ago 1.53GB 2026-01-09 01:20:53.376845 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 a0a2854fc448 23 hours ago 675MB 2026-01-09 01:20:53.376850 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 831a162f22d5 23 hours ago 279MB 2026-01-09 01:20:53.376855 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 1259e039edf3 23 hours ago 1.02GB 2026-01-09 01:20:53.376860 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f527a3280384 23 hours ago 328MB 2026-01-09 01:20:53.376865 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 adcbac3215aa 23 hours ago 417MB 2026-01-09 01:20:53.376869 | orchestrator | registry.osism.tech/kolla/cron 2024.2 8926fd73257f 23 hours ago 271MB 2026-01-09 01:20:53.376874 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 9f0676474158 23 hours ago 271MB 2026-01-09 01:20:53.376879 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 2fa0daf17a55 23 hours ago 584MB 2026-01-09 01:20:53.376898 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 c5d59be14713 23 hours ago 282MB 2026-01-09 01:20:53.376903 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9500faa9f5e4 23 hours ago 1.15GB 2026-01-09 01:20:53.376907 | orchestrator | 
registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7bfd72b2e931 23 hours ago 304MB 2026-01-09 01:20:53.376912 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 cb81d3c1a304 23 hours ago 297MB 2026-01-09 01:20:53.376917 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 f4f88ddcf361 23 hours ago 306MB 2026-01-09 01:20:53.376921 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9e9b2a749855 23 hours ago 311MB 2026-01-09 01:20:53.376926 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 da8326fbdfa0 23 hours ago 362MB 2026-01-09 01:20:53.376947 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 9aa8aff9caf2 23 hours ago 457MB 2026-01-09 01:20:53.376953 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 4719af68f793 23 hours ago 284MB 2026-01-09 01:20:53.376961 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 e501b53579f2 23 hours ago 284MB 2026-01-09 01:20:53.376971 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d7b0f4dd4330 23 hours ago 278MB 2026-01-09 01:20:53.376981 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ed6f07ffbe42 23 hours ago 278MB 2026-01-09 01:20:53.376990 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 5a98bfb47b80 23 hours ago 996MB 2026-01-09 01:20:53.376997 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 dfafc9f7afb1 23 hours ago 995MB 2026-01-09 01:20:53.377004 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2b3e3a0e91f0 23 hours ago 996MB 2026-01-09 01:20:53.377011 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0ad56ed5fd77 23 hours ago 1.17GB 2026-01-09 01:20:53.377018 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 06ef6d50d22e 23 hours ago 1.03GB 2026-01-09 01:20:53.377026 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 
71d9786b448a 23 hours ago 1.06GB 2026-01-09 01:20:53.377033 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 350ee0f05475 23 hours ago 1.03GB 2026-01-09 01:20:53.377040 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 8c434086e541 23 hours ago 1.03GB 2026-01-09 01:20:53.377048 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 84ca48dc448a 23 hours ago 1.06GB 2026-01-09 01:20:53.377056 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7af4915d7d55 23 hours ago 1.1GB 2026-01-09 01:20:53.377063 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 fc2a752737ed 23 hours ago 1.41GB 2026-01-09 01:20:53.377071 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 0e868106e8a0 23 hours ago 1.42GB 2026-01-09 01:20:53.377078 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f554233359de 23 hours ago 1.41GB 2026-01-09 01:20:53.377104 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 b28c5dc0e2d3 23 hours ago 1.72GB 2026-01-09 01:20:53.377110 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 0d2b531879e6 23 hours ago 981MB 2026-01-09 01:20:53.377120 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 7f070537180f 23 hours ago 1.22GB 2026-01-09 01:20:53.377124 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7f0621155ca4 23 hours ago 1.22GB 2026-01-09 01:20:53.377129 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5507ac54c2b1 23 hours ago 1.22GB 2026-01-09 01:20:53.377133 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c75c743eb2df 23 hours ago 1.37GB 2026-01-09 01:20:53.377138 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0067c8881dd8 23 hours ago 1.25GB 2026-01-09 01:20:53.377142 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 7027f8816a0c 23 hours ago 1.13GB 2026-01-09 01:20:53.377147 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 
ad394dfe7a70 23 hours ago 994MB 2026-01-09 01:20:53.377151 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 c7e192042936 23 hours ago 990MB 2026-01-09 01:20:53.377156 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 15288884cdd7 23 hours ago 989MB 2026-01-09 01:20:53.377166 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2427389bd146 23 hours ago 990MB 2026-01-09 01:20:53.377171 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 8bddd581879b 23 hours ago 994MB 2026-01-09 01:20:53.377175 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 71af3b46df48 23 hours ago 990MB 2026-01-09 01:20:53.377180 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d25262980980 23 hours ago 1.09GB 2026-01-09 01:20:53.377184 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 6348c0f155de 23 hours ago 1.04GB 2026-01-09 01:20:53.377189 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 f69307412ca4 23 hours ago 1.05GB 2026-01-09 01:20:53.377256 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 0282c2332475 23 hours ago 845MB 2026-01-09 01:20:53.377263 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 6bb1789b7973 23 hours ago 845MB 2026-01-09 01:20:53.377267 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1ed9ed01a9fc 23 hours ago 845MB 2026-01-09 01:20:53.377272 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 5e870de9c1cd 23 hours ago 845MB 2026-01-09 01:20:53.714801 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-01-09 01:20:53.721810 | orchestrator | + set -e 2026-01-09 01:20:53.721896 | orchestrator | + source /opt/manager-vars.sh 2026-01-09 01:20:53.722747 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-09 01:20:53.722796 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-09 01:20:53.722806 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-09 01:20:53.722813 | 
orchestrator | ++ CEPH_VERSION=reef 2026-01-09 01:20:53.722819 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-09 01:20:53.723511 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-09 01:20:53.723533 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 01:20:53.723540 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 01:20:53.723546 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-09 01:20:53.723552 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-09 01:20:53.723558 | orchestrator | ++ export ARA=false 2026-01-09 01:20:53.723565 | orchestrator | ++ ARA=false 2026-01-09 01:20:53.723571 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-09 01:20:53.723577 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-09 01:20:53.723585 | orchestrator | ++ export TEMPEST=true 2026-01-09 01:20:53.723591 | orchestrator | ++ TEMPEST=true 2026-01-09 01:20:53.723598 | orchestrator | ++ export IS_ZUUL=true 2026-01-09 01:20:53.723605 | orchestrator | ++ IS_ZUUL=true 2026-01-09 01:20:53.723612 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 01:20:53.723618 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 01:20:53.723625 | orchestrator | ++ export EXTERNAL_API=false 2026-01-09 01:20:53.723632 | orchestrator | ++ EXTERNAL_API=false 2026-01-09 01:20:53.723638 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-09 01:20:53.723645 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-09 01:20:53.723652 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-09 01:20:53.723658 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-09 01:20:53.723665 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-09 01:20:53.723672 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-09 01:20:53.723678 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-09 01:20:53.723685 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-01-09 01:20:53.731920 | 
orchestrator | + set -e 2026-01-09 01:20:53.732008 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-09 01:20:53.732017 | orchestrator | ++ export INTERACTIVE=false 2026-01-09 01:20:53.732026 | orchestrator | ++ INTERACTIVE=false 2026-01-09 01:20:53.732033 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-09 01:20:53.732040 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-09 01:20:53.732048 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-09 01:20:53.733117 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-09 01:20:53.739636 | orchestrator | 2026-01-09 01:20:53.739718 | orchestrator | # Ceph status 2026-01-09 01:20:53.739726 | orchestrator | 2026-01-09 01:20:53.739734 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 01:20:53.739741 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 01:20:53.739747 | orchestrator | + echo 2026-01-09 01:20:53.739792 | orchestrator | + echo '# Ceph status' 2026-01-09 01:20:53.739799 | orchestrator | + echo 2026-01-09 01:20:53.739806 | orchestrator | + ceph -s 2026-01-09 01:20:54.372934 | orchestrator | cluster: 2026-01-09 01:20:54.373016 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-01-09 01:20:54.373027 | orchestrator | health: HEALTH_OK 2026-01-09 01:20:54.373034 | orchestrator | 2026-01-09 01:20:54.373041 | orchestrator | services: 2026-01-09 01:20:54.373048 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m) 2026-01-09 01:20:54.373057 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2026-01-09 01:20:54.373065 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-01-09 01:20:54.373073 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 26m) 2026-01-09 01:20:54.373081 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-01-09 01:20:54.373086 | orchestrator 
| 2026-01-09 01:20:54.373090 | orchestrator | data: 2026-01-09 01:20:54.373094 | orchestrator | volumes: 1/1 healthy 2026-01-09 01:20:54.373098 | orchestrator | pools: 14 pools, 401 pgs 2026-01-09 01:20:54.373102 | orchestrator | objects: 555 objects, 2.2 GiB 2026-01-09 01:20:54.373107 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-01-09 01:20:54.373111 | orchestrator | pgs: 401 active+clean 2026-01-09 01:20:54.373115 | orchestrator | 2026-01-09 01:20:54.420184 | orchestrator | 2026-01-09 01:20:54.420317 | orchestrator | # Ceph versions 2026-01-09 01:20:54.420323 | orchestrator | 2026-01-09 01:20:54.420338 | orchestrator | + echo 2026-01-09 01:20:54.420348 | orchestrator | + echo '# Ceph versions' 2026-01-09 01:20:54.420359 | orchestrator | + echo 2026-01-09 01:20:54.420363 | orchestrator | + ceph versions 2026-01-09 01:20:55.014793 | orchestrator | { 2026-01-09 01:20:55.014896 | orchestrator | "mon": { 2026-01-09 01:20:55.014909 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-09 01:20:55.014918 | orchestrator | }, 2026-01-09 01:20:55.014925 | orchestrator | "mgr": { 2026-01-09 01:20:55.014932 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-09 01:20:55.014938 | orchestrator | }, 2026-01-09 01:20:55.014945 | orchestrator | "osd": { 2026-01-09 01:20:55.014952 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-01-09 01:20:55.014960 | orchestrator | }, 2026-01-09 01:20:55.014967 | orchestrator | "mds": { 2026-01-09 01:20:55.014974 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-09 01:20:55.014980 | orchestrator | }, 2026-01-09 01:20:55.014984 | orchestrator | "rgw": { 2026-01-09 01:20:55.015009 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-09 01:20:55.015014 | 
orchestrator | }, 2026-01-09 01:20:55.015018 | orchestrator | "overall": { 2026-01-09 01:20:55.015023 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-01-09 01:20:55.015028 | orchestrator | } 2026-01-09 01:20:55.015032 | orchestrator | } 2026-01-09 01:20:55.070948 | orchestrator | 2026-01-09 01:20:55.071059 | orchestrator | # Ceph OSD tree 2026-01-09 01:20:55.071072 | orchestrator | 2026-01-09 01:20:55.071079 | orchestrator | + echo 2026-01-09 01:20:55.071086 | orchestrator | + echo '# Ceph OSD tree' 2026-01-09 01:20:55.071094 | orchestrator | + echo 2026-01-09 01:20:55.071100 | orchestrator | + ceph osd df tree 2026-01-09 01:20:55.591334 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-01-09 01:20:55.591438 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-01-09 01:20:55.591448 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-01-09 01:20:55.591455 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 852 MiB 779 MiB 1 KiB 74 MiB 19 GiB 4.17 0.70 174 up osd.0 2026-01-09 01:20:55.591461 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.67 1.30 218 up osd.3 2026-01-09 01:20:55.591468 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-01-09 01:20:55.591476 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.84 1.16 209 up osd.1 2026-01-09 01:20:55.591505 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1020 MiB 947 MiB 1 KiB 74 MiB 19 GiB 4.99 0.84 181 up osd.5 2026-01-09 01:20:55.591512 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-01-09 01:20:55.591520 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.93 1.17 198 up osd.2 
2026-01-09 01:20:55.591529 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1004 MiB 931 MiB 1 KiB 74 MiB 19 GiB 4.91 0.83 190 up osd.4 2026-01-09 01:20:55.591535 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-01-09 01:20:55.591542 | orchestrator | MIN/MAX VAR: 0.70/1.30 STDDEV: 1.28 2026-01-09 01:20:55.634367 | orchestrator | 2026-01-09 01:20:55.634447 | orchestrator | # Ceph monitor status 2026-01-09 01:20:55.634454 | orchestrator | 2026-01-09 01:20:55.634460 | orchestrator | + echo 2026-01-09 01:20:55.634464 | orchestrator | + echo '# Ceph monitor status' 2026-01-09 01:20:55.634469 | orchestrator | + echo 2026-01-09 01:20:55.634473 | orchestrator | + ceph mon stat 2026-01-09 01:20:56.266057 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-01-09 01:20:56.319263 | orchestrator | 2026-01-09 01:20:56.319344 | orchestrator | # Ceph quorum status 2026-01-09 01:20:56.319351 | orchestrator | 2026-01-09 01:20:56.319356 | orchestrator | + echo 2026-01-09 01:20:56.319361 | orchestrator | + echo '# Ceph quorum status' 2026-01-09 01:20:56.319366 | orchestrator | + echo 2026-01-09 01:20:56.319467 | orchestrator | + ceph quorum_status 2026-01-09 01:20:56.319912 | orchestrator | + jq 2026-01-09 01:20:56.960963 | orchestrator | { 2026-01-09 01:20:56.961181 | orchestrator | "election_epoch": 8, 2026-01-09 01:20:56.961330 | orchestrator | "quorum": [ 2026-01-09 01:20:56.961350 | orchestrator | 0, 2026-01-09 01:20:56.961365 | orchestrator | 1, 2026-01-09 01:20:56.961380 | orchestrator | 2 2026-01-09 01:20:56.961396 | orchestrator | ], 2026-01-09 01:20:56.961412 | orchestrator | "quorum_names": [ 2026-01-09 01:20:56.961428 | 
orchestrator | "testbed-node-0", 2026-01-09 01:20:56.961444 | orchestrator | "testbed-node-1", 2026-01-09 01:20:56.961460 | orchestrator | "testbed-node-2" 2026-01-09 01:20:56.961476 | orchestrator | ], 2026-01-09 01:20:56.961491 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-01-09 01:20:56.961508 | orchestrator | "quorum_age": 1804, 2026-01-09 01:20:56.961525 | orchestrator | "features": { 2026-01-09 01:20:56.961543 | orchestrator | "quorum_con": "4540138322906710015", 2026-01-09 01:20:56.961559 | orchestrator | "quorum_mon": [ 2026-01-09 01:20:56.961574 | orchestrator | "kraken", 2026-01-09 01:20:56.961590 | orchestrator | "luminous", 2026-01-09 01:20:56.961606 | orchestrator | "mimic", 2026-01-09 01:20:56.961621 | orchestrator | "osdmap-prune", 2026-01-09 01:20:56.961636 | orchestrator | "nautilus", 2026-01-09 01:20:56.961652 | orchestrator | "octopus", 2026-01-09 01:20:56.961667 | orchestrator | "pacific", 2026-01-09 01:20:56.961683 | orchestrator | "elector-pinging", 2026-01-09 01:20:56.961700 | orchestrator | "quincy", 2026-01-09 01:20:56.961716 | orchestrator | "reef" 2026-01-09 01:20:56.961731 | orchestrator | ] 2026-01-09 01:20:56.961743 | orchestrator | }, 2026-01-09 01:20:56.961759 | orchestrator | "monmap": { 2026-01-09 01:20:56.961773 | orchestrator | "epoch": 1, 2026-01-09 01:20:56.961789 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-01-09 01:20:56.961806 | orchestrator | "modified": "2026-01-09T00:50:33.334314Z", 2026-01-09 01:20:56.961821 | orchestrator | "created": "2026-01-09T00:50:33.334314Z", 2026-01-09 01:20:56.961836 | orchestrator | "min_mon_release": 18, 2026-01-09 01:20:56.961851 | orchestrator | "min_mon_release_name": "reef", 2026-01-09 01:20:56.961867 | orchestrator | "election_strategy": 1, 2026-01-09 01:20:56.961882 | orchestrator | "disallowed_leaders: ": "", 2026-01-09 01:20:56.961899 | orchestrator | "stretch_mode": false, 2026-01-09 01:20:56.961908 | orchestrator | "tiebreaker_mon": "", 
2026-01-09 01:20:56.961917 | orchestrator | "removed_ranks: ": "", 2026-01-09 01:20:56.961926 | orchestrator | "features": { 2026-01-09 01:20:56.961936 | orchestrator | "persistent": [ 2026-01-09 01:20:56.961974 | orchestrator | "kraken", 2026-01-09 01:20:56.961983 | orchestrator | "luminous", 2026-01-09 01:20:56.961992 | orchestrator | "mimic", 2026-01-09 01:20:56.962001 | orchestrator | "osdmap-prune", 2026-01-09 01:20:56.962010 | orchestrator | "nautilus", 2026-01-09 01:20:56.962098 | orchestrator | "octopus", 2026-01-09 01:20:56.962112 | orchestrator | "pacific", 2026-01-09 01:20:56.962121 | orchestrator | "elector-pinging", 2026-01-09 01:20:56.962130 | orchestrator | "quincy", 2026-01-09 01:20:56.962139 | orchestrator | "reef" 2026-01-09 01:20:56.962148 | orchestrator | ], 2026-01-09 01:20:56.962156 | orchestrator | "optional": [] 2026-01-09 01:20:56.962165 | orchestrator | }, 2026-01-09 01:20:56.962174 | orchestrator | "mons": [ 2026-01-09 01:20:56.962183 | orchestrator | { 2026-01-09 01:20:56.962191 | orchestrator | "rank": 0, 2026-01-09 01:20:56.962248 | orchestrator | "name": "testbed-node-0", 2026-01-09 01:20:56.962265 | orchestrator | "public_addrs": { 2026-01-09 01:20:56.962281 | orchestrator | "addrvec": [ 2026-01-09 01:20:56.962296 | orchestrator | { 2026-01-09 01:20:56.962311 | orchestrator | "type": "v2", 2026-01-09 01:20:56.962320 | orchestrator | "addr": "192.168.16.10:3300", 2026-01-09 01:20:56.962329 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962338 | orchestrator | }, 2026-01-09 01:20:56.962347 | orchestrator | { 2026-01-09 01:20:56.962355 | orchestrator | "type": "v1", 2026-01-09 01:20:56.962364 | orchestrator | "addr": "192.168.16.10:6789", 2026-01-09 01:20:56.962372 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962381 | orchestrator | } 2026-01-09 01:20:56.962389 | orchestrator | ] 2026-01-09 01:20:56.962398 | orchestrator | }, 2026-01-09 01:20:56.962407 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-01-09 01:20:56.962416 | 
orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-01-09 01:20:56.962424 | orchestrator | "priority": 0, 2026-01-09 01:20:56.962433 | orchestrator | "weight": 0, 2026-01-09 01:20:56.962442 | orchestrator | "crush_location": "{}" 2026-01-09 01:20:56.962450 | orchestrator | }, 2026-01-09 01:20:56.962459 | orchestrator | { 2026-01-09 01:20:56.962468 | orchestrator | "rank": 1, 2026-01-09 01:20:56.962476 | orchestrator | "name": "testbed-node-1", 2026-01-09 01:20:56.962485 | orchestrator | "public_addrs": { 2026-01-09 01:20:56.962493 | orchestrator | "addrvec": [ 2026-01-09 01:20:56.962502 | orchestrator | { 2026-01-09 01:20:56.962511 | orchestrator | "type": "v2", 2026-01-09 01:20:56.962519 | orchestrator | "addr": "192.168.16.11:3300", 2026-01-09 01:20:56.962539 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962556 | orchestrator | }, 2026-01-09 01:20:56.962565 | orchestrator | { 2026-01-09 01:20:56.962594 | orchestrator | "type": "v1", 2026-01-09 01:20:56.962603 | orchestrator | "addr": "192.168.16.11:6789", 2026-01-09 01:20:56.962612 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962620 | orchestrator | } 2026-01-09 01:20:56.962629 | orchestrator | ] 2026-01-09 01:20:56.962638 | orchestrator | }, 2026-01-09 01:20:56.962646 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-01-09 01:20:56.962655 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-01-09 01:20:56.962664 | orchestrator | "priority": 0, 2026-01-09 01:20:56.962672 | orchestrator | "weight": 0, 2026-01-09 01:20:56.962681 | orchestrator | "crush_location": "{}" 2026-01-09 01:20:56.962690 | orchestrator | }, 2026-01-09 01:20:56.962698 | orchestrator | { 2026-01-09 01:20:56.962707 | orchestrator | "rank": 2, 2026-01-09 01:20:56.962716 | orchestrator | "name": "testbed-node-2", 2026-01-09 01:20:56.962724 | orchestrator | "public_addrs": { 2026-01-09 01:20:56.962733 | orchestrator | "addrvec": [ 2026-01-09 01:20:56.962742 | orchestrator | { 2026-01-09 01:20:56.962751 | orchestrator | 
"type": "v2", 2026-01-09 01:20:56.962760 | orchestrator | "addr": "192.168.16.12:3300", 2026-01-09 01:20:56.962768 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962777 | orchestrator | }, 2026-01-09 01:20:56.962785 | orchestrator | { 2026-01-09 01:20:56.962794 | orchestrator | "type": "v1", 2026-01-09 01:20:56.962803 | orchestrator | "addr": "192.168.16.12:6789", 2026-01-09 01:20:56.962811 | orchestrator | "nonce": 0 2026-01-09 01:20:56.962820 | orchestrator | } 2026-01-09 01:20:56.962828 | orchestrator | ] 2026-01-09 01:20:56.962837 | orchestrator | }, 2026-01-09 01:20:56.962846 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-01-09 01:20:56.962855 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-01-09 01:20:56.962873 | orchestrator | "priority": 0, 2026-01-09 01:20:56.962882 | orchestrator | "weight": 0, 2026-01-09 01:20:56.962891 | orchestrator | "crush_location": "{}" 2026-01-09 01:20:56.962900 | orchestrator | } 2026-01-09 01:20:56.962908 | orchestrator | ] 2026-01-09 01:20:56.962917 | orchestrator | } 2026-01-09 01:20:56.962926 | orchestrator | } 2026-01-09 01:20:56.962954 | orchestrator | 2026-01-09 01:20:56.962964 | orchestrator | # Ceph free space status 2026-01-09 01:20:56.962973 | orchestrator | 2026-01-09 01:20:56.962982 | orchestrator | + echo 2026-01-09 01:20:56.962991 | orchestrator | + echo '# Ceph free space status' 2026-01-09 01:20:56.963000 | orchestrator | + echo 2026-01-09 01:20:56.963008 | orchestrator | + ceph df 2026-01-09 01:20:57.543961 | orchestrator | --- RAW STORAGE --- 2026-01-09 01:20:57.544086 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-01-09 01:20:57.544120 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-01-09 01:20:57.544134 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-01-09 01:20:57.544147 | orchestrator | 2026-01-09 01:20:57.544160 | orchestrator | --- POOLS --- 2026-01-09 01:20:57.544173 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 
2026-01-09 01:20:57.544187 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-01-09 01:20:57.544250 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-01-09 01:20:57.544262 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-01-09 01:20:57.544274 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-01-09 01:20:57.544286 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-01-09 01:20:57.544298 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-01-09 01:20:57.544309 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-01-09 01:20:57.544321 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-01-09 01:20:57.544332 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB 2026-01-09 01:20:57.544343 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-01-09 01:20:57.544355 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-01-09 01:20:57.544368 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-01-09 01:20:57.544380 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-01-09 01:20:57.544393 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-01-09 01:20:57.593749 | orchestrator | ++ semver latest 5.0.0 2026-01-09 01:20:57.647807 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-09 01:20:57.647909 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-09 01:20:57.647925 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-01-09 01:20:57.647936 | orchestrator | + osism apply facts 2026-01-09 01:20:59.764537 | orchestrator | 2026-01-09 01:20:59 | INFO  | Task 5b413806-c5d4-44d8-8028-46d4b97a584e (facts) was prepared for execution. 2026-01-09 01:20:59.764641 | orchestrator | 2026-01-09 01:20:59 | INFO  | It takes a moment until task 5b413806-c5d4-44d8-8028-46d4b97a584e (facts) has been started and output is visible here. 
2026-01-09 01:21:13.841459 | orchestrator | 2026-01-09 01:21:13.841597 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-09 01:21:13.841619 | orchestrator | 2026-01-09 01:21:13.841637 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-09 01:21:13.841653 | orchestrator | Friday 09 January 2026 01:21:04 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-01-09 01:21:13.841670 | orchestrator | ok: [testbed-manager] 2026-01-09 01:21:13.841688 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:13.841702 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:13.841716 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:13.841748 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:21:13.841777 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:21:13.841793 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:21:13.841836 | orchestrator | 2026-01-09 01:21:13.841853 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-09 01:21:13.841868 | orchestrator | Friday 09 January 2026 01:21:06 +0000 (0:00:01.643) 0:00:01.927 ******** 2026-01-09 01:21:13.841883 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:21:13.841898 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:13.841913 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:21:13.841926 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:21:13.841941 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:21:13.841956 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:21:13.841968 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:21:13.841982 | orchestrator | 2026-01-09 01:21:13.841997 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-09 01:21:13.842074 | orchestrator | 2026-01-09 01:21:13.842092 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-09 01:21:13.842107 | orchestrator | Friday 09 January 2026 01:21:07 +0000 (0:00:01.368) 0:00:03.295 ******** 2026-01-09 01:21:13.842123 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:13.842139 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:13.842155 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:13.842171 | orchestrator | ok: [testbed-manager] 2026-01-09 01:21:13.842230 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:21:13.842246 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:21:13.842260 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:21:13.842274 | orchestrator | 2026-01-09 01:21:13.842285 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-09 01:21:13.842296 | orchestrator | 2026-01-09 01:21:13.842305 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-09 01:21:13.842316 | orchestrator | Friday 09 January 2026 01:21:12 +0000 (0:00:05.394) 0:00:08.689 ******** 2026-01-09 01:21:13.842326 | orchestrator | skipping: [testbed-manager] 2026-01-09 01:21:13.842336 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:13.842346 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:21:13.842357 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:21:13.842367 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:21:13.842378 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:21:13.842388 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:21:13.842400 | orchestrator | 2026-01-09 01:21:13.842409 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:21:13.842418 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842428 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-09 01:21:13.842437 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842446 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842454 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842462 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842471 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:13.842480 | orchestrator | 2026-01-09 01:21:13.842488 | orchestrator | 2026-01-09 01:21:13.842499 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:21:13.842514 | orchestrator | Friday 09 January 2026 01:21:13 +0000 (0:00:00.594) 0:00:09.284 ******** 2026-01-09 01:21:13.842528 | orchestrator | =============================================================================== 2026-01-09 01:21:13.842558 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.39s 2026-01-09 01:21:13.842571 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.64s 2026-01-09 01:21:13.842586 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s 2026-01-09 01:21:13.842601 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-01-09 01:21:14.190910 | orchestrator | + osism validate ceph-mons 2026-01-09 01:21:47.688517 | orchestrator | 2026-01-09 01:21:47.688613 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-01-09 01:21:47.688624 | orchestrator | 2026-01-09 01:21:47.688630 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-01-09 01:21:47.688637 | orchestrator | Friday 09 January 2026 01:21:31 +0000 (0:00:00.459) 0:00:00.459 ******** 2026-01-09 01:21:47.688645 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.688652 | orchestrator | 2026-01-09 01:21:47.688656 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-09 01:21:47.688660 | orchestrator | Friday 09 January 2026 01:21:32 +0000 (0:00:00.923) 0:00:01.382 ******** 2026-01-09 01:21:47.688665 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.688669 | orchestrator | 2026-01-09 01:21:47.688672 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-09 01:21:47.688678 | orchestrator | Friday 09 January 2026 01:21:33 +0000 (0:00:01.040) 0:00:02.423 ******** 2026-01-09 01:21:47.688685 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688693 | orchestrator | 2026-01-09 01:21:47.688702 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-09 01:21:47.688710 | orchestrator | Friday 09 January 2026 01:21:33 +0000 (0:00:00.135) 0:00:02.558 ******** 2026-01-09 01:21:47.688715 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688721 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:47.688726 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:47.688732 | orchestrator | 2026-01-09 01:21:47.688751 | orchestrator | TASK [Get container info] ****************************************************** 2026-01-09 01:21:47.688757 | orchestrator | Friday 09 January 2026 01:21:33 +0000 (0:00:00.336) 0:00:02.895 ******** 2026-01-09 01:21:47.688763 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:47.688769 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:47.688774 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688780 | 
orchestrator | 2026-01-09 01:21:47.688786 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-09 01:21:47.688791 | orchestrator | Friday 09 January 2026 01:21:34 +0000 (0:00:01.133) 0:00:04.029 ******** 2026-01-09 01:21:47.688797 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.688804 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:21:47.688810 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:21:47.688816 | orchestrator | 2026-01-09 01:21:47.688823 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-09 01:21:47.688829 | orchestrator | Friday 09 January 2026 01:21:35 +0000 (0:00:00.310) 0:00:04.340 ******** 2026-01-09 01:21:47.688835 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688841 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:47.688848 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:47.688854 | orchestrator | 2026-01-09 01:21:47.688862 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:21:47.688866 | orchestrator | Friday 09 January 2026 01:21:35 +0000 (0:00:00.556) 0:00:04.896 ******** 2026-01-09 01:21:47.688870 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688874 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:47.688878 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:47.688881 | orchestrator | 2026-01-09 01:21:47.688885 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-01-09 01:21:47.688889 | orchestrator | Friday 09 January 2026 01:21:36 +0000 (0:00:00.332) 0:00:05.229 ******** 2026-01-09 01:21:47.688912 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.688916 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:21:47.688920 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:21:47.688924 | orchestrator | 2026-01-09 
01:21:47.688927 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-01-09 01:21:47.688931 | orchestrator | Friday 09 January 2026 01:21:36 +0000 (0:00:00.327) 0:00:05.556 ******** 2026-01-09 01:21:47.688935 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.688939 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:21:47.688943 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:21:47.688946 | orchestrator | 2026-01-09 01:21:47.688950 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:21:47.688954 | orchestrator | Friday 09 January 2026 01:21:36 +0000 (0:00:00.484) 0:00:06.041 ******** 2026-01-09 01:21:47.688957 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.688961 | orchestrator | 2026-01-09 01:21:47.688965 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:21:47.688968 | orchestrator | Friday 09 January 2026 01:21:37 +0000 (0:00:00.250) 0:00:06.291 ******** 2026-01-09 01:21:47.688972 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.688976 | orchestrator | 2026-01-09 01:21:47.688980 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-09 01:21:47.688984 | orchestrator | Friday 09 January 2026 01:21:37 +0000 (0:00:00.259) 0:00:06.550 ******** 2026-01-09 01:21:47.688987 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.688991 | orchestrator | 2026-01-09 01:21:47.688995 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:47.688999 | orchestrator | Friday 09 January 2026 01:21:37 +0000 (0:00:00.250) 0:00:06.801 ******** 2026-01-09 01:21:47.689002 | orchestrator | 2026-01-09 01:21:47.689006 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:47.689009 | orchestrator | 
Friday 09 January 2026 01:21:37 +0000 (0:00:00.075) 0:00:06.876 ******** 2026-01-09 01:21:47.689013 | orchestrator | 2026-01-09 01:21:47.689017 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:47.689021 | orchestrator | Friday 09 January 2026 01:21:37 +0000 (0:00:00.073) 0:00:06.949 ******** 2026-01-09 01:21:47.689024 | orchestrator | 2026-01-09 01:21:47.689028 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:21:47.689032 | orchestrator | Friday 09 January 2026 01:21:37 +0000 (0:00:00.080) 0:00:07.030 ******** 2026-01-09 01:21:47.689035 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689039 | orchestrator | 2026-01-09 01:21:47.689042 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-09 01:21:47.689046 | orchestrator | Friday 09 January 2026 01:21:38 +0000 (0:00:00.351) 0:00:07.381 ******** 2026-01-09 01:21:47.689050 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689054 | orchestrator | 2026-01-09 01:21:47.689071 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-09 01:21:47.689076 | orchestrator | Friday 09 January 2026 01:21:38 +0000 (0:00:00.245) 0:00:07.627 ******** 2026-01-09 01:21:47.689081 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689085 | orchestrator | 2026-01-09 01:21:47.689090 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-09 01:21:47.689094 | orchestrator | Friday 09 January 2026 01:21:38 +0000 (0:00:00.138) 0:00:07.766 ******** 2026-01-09 01:21:47.689099 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:21:47.689103 | orchestrator | 2026-01-09 01:21:47.689107 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-09 01:21:47.689112 | orchestrator | Friday 
09 January 2026 01:21:40 +0000 (0:00:01.719) 0:00:09.485 ******** 2026-01-09 01:21:47.689116 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689120 | orchestrator | 2026-01-09 01:21:47.689124 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-01-09 01:21:47.689134 | orchestrator | Friday 09 January 2026 01:21:40 +0000 (0:00:00.513) 0:00:09.999 ******** 2026-01-09 01:21:47.689139 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689143 | orchestrator | 2026-01-09 01:21:47.689147 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-01-09 01:21:47.689154 | orchestrator | Friday 09 January 2026 01:21:40 +0000 (0:00:00.143) 0:00:10.143 ******** 2026-01-09 01:21:47.689160 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689166 | orchestrator | 2026-01-09 01:21:47.689199 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-01-09 01:21:47.689206 | orchestrator | Friday 09 January 2026 01:21:41 +0000 (0:00:00.329) 0:00:10.473 ******** 2026-01-09 01:21:47.689213 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689219 | orchestrator | 2026-01-09 01:21:47.689226 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-01-09 01:21:47.689231 | orchestrator | Friday 09 January 2026 01:21:41 +0000 (0:00:00.345) 0:00:10.818 ******** 2026-01-09 01:21:47.689235 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689239 | orchestrator | 2026-01-09 01:21:47.689244 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-01-09 01:21:47.689249 | orchestrator | Friday 09 January 2026 01:21:41 +0000 (0:00:00.133) 0:00:10.952 ******** 2026-01-09 01:21:47.689253 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689258 | orchestrator | 2026-01-09 01:21:47.689262 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-01-09 01:21:47.689267 | orchestrator | Friday 09 January 2026 01:21:41 +0000 (0:00:00.141) 0:00:11.094 ******** 2026-01-09 01:21:47.689272 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689276 | orchestrator | 2026-01-09 01:21:47.689281 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-09 01:21:47.689285 | orchestrator | Friday 09 January 2026 01:21:42 +0000 (0:00:00.143) 0:00:11.237 ******** 2026-01-09 01:21:47.689290 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:21:47.689294 | orchestrator | 2026-01-09 01:21:47.689299 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-09 01:21:47.689303 | orchestrator | Friday 09 January 2026 01:21:43 +0000 (0:00:01.423) 0:00:12.660 ******** 2026-01-09 01:21:47.689308 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689312 | orchestrator | 2026-01-09 01:21:47.689316 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-09 01:21:47.689321 | orchestrator | Friday 09 January 2026 01:21:43 +0000 (0:00:00.295) 0:00:12.955 ******** 2026-01-09 01:21:47.689325 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689329 | orchestrator | 2026-01-09 01:21:47.689333 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-09 01:21:47.689338 | orchestrator | Friday 09 January 2026 01:21:43 +0000 (0:00:00.150) 0:00:13.106 ******** 2026-01-09 01:21:47.689342 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:21:47.689346 | orchestrator | 2026-01-09 01:21:47.689350 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-09 01:21:47.689356 | orchestrator | Friday 09 January 2026 01:21:44 +0000 (0:00:00.152) 0:00:13.259 ******** 2026-01-09 01:21:47.689362 | orchestrator | 
skipping: [testbed-node-0] 2026-01-09 01:21:47.689368 | orchestrator | 2026-01-09 01:21:47.689377 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-09 01:21:47.689385 | orchestrator | Friday 09 January 2026 01:21:44 +0000 (0:00:00.345) 0:00:13.604 ******** 2026-01-09 01:21:47.689391 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689396 | orchestrator | 2026-01-09 01:21:47.689403 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-09 01:21:47.689417 | orchestrator | Friday 09 January 2026 01:21:44 +0000 (0:00:00.148) 0:00:13.753 ******** 2026-01-09 01:21:47.689423 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.689429 | orchestrator | 2026-01-09 01:21:47.689435 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-09 01:21:47.689447 | orchestrator | Friday 09 January 2026 01:21:44 +0000 (0:00:00.300) 0:00:14.054 ******** 2026-01-09 01:21:47.689453 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:21:47.689459 | orchestrator | 2026-01-09 01:21:47.689469 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:21:47.689475 | orchestrator | Friday 09 January 2026 01:21:45 +0000 (0:00:00.266) 0:00:14.320 ******** 2026-01-09 01:21:47.689482 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.689488 | orchestrator | 2026-01-09 01:21:47.689495 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:21:47.689502 | orchestrator | Friday 09 January 2026 01:21:46 +0000 (0:00:01.810) 0:00:16.131 ******** 2026-01-09 01:21:47.689506 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.689510 | orchestrator | 2026-01-09 01:21:47.689513 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-01-09 01:21:47.689517 | orchestrator | Friday 09 January 2026 01:21:47 +0000 (0:00:00.273) 0:00:16.404 ******** 2026-01-09 01:21:47.689521 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:47.689524 | orchestrator | 2026-01-09 01:21:47.689533 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:50.496874 | orchestrator | Friday 09 January 2026 01:21:47 +0000 (0:00:00.264) 0:00:16.669 ******** 2026-01-09 01:21:50.496979 | orchestrator | 2026-01-09 01:21:50.496988 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:50.496993 | orchestrator | Friday 09 January 2026 01:21:47 +0000 (0:00:00.078) 0:00:16.747 ******** 2026-01-09 01:21:50.496997 | orchestrator | 2026-01-09 01:21:50.497001 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:21:50.497006 | orchestrator | Friday 09 January 2026 01:21:47 +0000 (0:00:00.070) 0:00:16.817 ******** 2026-01-09 01:21:50.497009 | orchestrator | 2026-01-09 01:21:50.497013 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-09 01:21:50.497018 | orchestrator | Friday 09 January 2026 01:21:47 +0000 (0:00:00.074) 0:00:16.891 ******** 2026-01-09 01:21:50.497022 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:21:50.497026 | orchestrator | 2026-01-09 01:21:50.497030 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:21:50.497034 | orchestrator | Friday 09 January 2026 01:21:49 +0000 (0:00:01.571) 0:00:18.463 ******** 2026-01-09 01:21:50.497038 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-09 01:21:50.497042 | orchestrator |  "msg": [ 2026-01-09 
01:21:50.497047 | orchestrator |  "Validator run completed.", 2026-01-09 01:21:50.497067 | orchestrator |  "You can find the report file here:", 2026-01-09 01:21:50.497072 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-09T01:21:32+00:00-report.json", 2026-01-09 01:21:50.497077 | orchestrator |  "on the following host:", 2026-01-09 01:21:50.497081 | orchestrator |  "testbed-manager" 2026-01-09 01:21:50.497085 | orchestrator |  ] 2026-01-09 01:21:50.497101 | orchestrator | } 2026-01-09 01:21:50.497150 | orchestrator | 2026-01-09 01:21:50.497155 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:21:50.497161 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-09 01:21:50.497186 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:50.497194 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:21:50.497198 | orchestrator | 2026-01-09 01:21:50.497202 | orchestrator | 2026-01-09 01:21:50.497206 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:21:50.497225 | orchestrator | Friday 09 January 2026 01:21:50 +0000 (0:00:00.878) 0:00:19.341 ******** 2026-01-09 01:21:50.497229 | orchestrator | =============================================================================== 2026-01-09 01:21:50.497232 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s 2026-01-09 01:21:50.497237 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.72s 2026-01-09 01:21:50.497240 | orchestrator | Write report file ------------------------------------------------------- 1.57s 2026-01-09 01:21:50.497244 | orchestrator | Gather status data 
------------------------------------------------------ 1.42s 2026-01-09 01:21:50.497248 | orchestrator | Get container info ------------------------------------------------------ 1.13s 2026-01-09 01:21:50.497252 | orchestrator | Create report output directory ------------------------------------------ 1.04s 2026-01-09 01:21:50.497255 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s 2026-01-09 01:21:50.497259 | orchestrator | Print report file information ------------------------------------------- 0.88s 2026-01-09 01:21:50.497263 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2026-01-09 01:21:50.497266 | orchestrator | Set quorum test data ---------------------------------------------------- 0.51s 2026-01-09 01:21:50.497270 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.48s 2026-01-09 01:21:50.497274 | orchestrator | Print report file information ------------------------------------------- 0.35s 2026-01-09 01:21:50.497278 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.35s 2026-01-09 01:21:50.497281 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.35s 2026-01-09 01:21:50.497285 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-01-09 01:21:50.497289 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2026-01-09 01:21:50.497292 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-01-09 01:21:50.497296 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s 2026-01-09 01:21:50.497300 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-01-09 01:21:50.497304 | orchestrator | Set validation result to passed if 
no test failed ----------------------- 0.30s 2026-01-09 01:21:50.852797 | orchestrator | + osism validate ceph-mgrs 2026-01-09 01:22:22.689376 | orchestrator | 2026-01-09 01:22:22.689481 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-01-09 01:22:22.689491 | orchestrator | 2026-01-09 01:22:22.689498 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-09 01:22:22.689505 | orchestrator | Friday 09 January 2026 01:22:07 +0000 (0:00:00.474) 0:00:00.474 ******** 2026-01-09 01:22:22.689512 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.689519 | orchestrator | 2026-01-09 01:22:22.689525 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-09 01:22:22.689532 | orchestrator | Friday 09 January 2026 01:22:08 +0000 (0:00:00.883) 0:00:01.357 ******** 2026-01-09 01:22:22.689538 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.689544 | orchestrator | 2026-01-09 01:22:22.689550 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-09 01:22:22.689557 | orchestrator | Friday 09 January 2026 01:22:09 +0000 (0:00:01.067) 0:00:02.425 ******** 2026-01-09 01:22:22.689563 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689570 | orchestrator | 2026-01-09 01:22:22.689576 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-09 01:22:22.689582 | orchestrator | Friday 09 January 2026 01:22:09 +0000 (0:00:00.132) 0:00:02.558 ******** 2026-01-09 01:22:22.689588 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689595 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:22:22.689601 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:22:22.689607 | orchestrator | 2026-01-09 01:22:22.689634 | orchestrator | TASK [Get container info] 
****************************************************** 2026-01-09 01:22:22.689641 | orchestrator | Friday 09 January 2026 01:22:10 +0000 (0:00:00.308) 0:00:02.867 ******** 2026-01-09 01:22:22.689647 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689653 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:22:22.689659 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:22:22.689665 | orchestrator | 2026-01-09 01:22:22.689671 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-09 01:22:22.689677 | orchestrator | Friday 09 January 2026 01:22:11 +0000 (0:00:01.066) 0:00:03.934 ******** 2026-01-09 01:22:22.689697 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.689704 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:22:22.689710 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:22:22.689716 | orchestrator | 2026-01-09 01:22:22.689722 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-09 01:22:22.689728 | orchestrator | Friday 09 January 2026 01:22:11 +0000 (0:00:00.300) 0:00:04.234 ******** 2026-01-09 01:22:22.689734 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689740 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:22:22.689746 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:22:22.689753 | orchestrator | 2026-01-09 01:22:22.689759 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:22:22.689765 | orchestrator | Friday 09 January 2026 01:22:12 +0000 (0:00:00.518) 0:00:04.752 ******** 2026-01-09 01:22:22.689772 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689778 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:22:22.689784 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:22:22.689791 | orchestrator | 2026-01-09 01:22:22.689797 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 
2026-01-09 01:22:22.689803 | orchestrator | Friday 09 January 2026 01:22:12 +0000 (0:00:00.307) 0:00:05.060 ******** 2026-01-09 01:22:22.689810 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.689817 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:22:22.689823 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:22:22.689829 | orchestrator | 2026-01-09 01:22:22.689836 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-01-09 01:22:22.689842 | orchestrator | Friday 09 January 2026 01:22:12 +0000 (0:00:00.314) 0:00:05.374 ******** 2026-01-09 01:22:22.689846 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.689850 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:22:22.689853 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:22:22.689857 | orchestrator | 2026-01-09 01:22:22.689861 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:22:22.689864 | orchestrator | Friday 09 January 2026 01:22:13 +0000 (0:00:00.529) 0:00:05.903 ******** 2026-01-09 01:22:22.689869 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.689875 | orchestrator | 2026-01-09 01:22:22.689881 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:22:22.689888 | orchestrator | Friday 09 January 2026 01:22:13 +0000 (0:00:00.245) 0:00:06.149 ******** 2026-01-09 01:22:22.689897 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.689904 | orchestrator | 2026-01-09 01:22:22.689910 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-09 01:22:22.689917 | orchestrator | Friday 09 January 2026 01:22:13 +0000 (0:00:00.286) 0:00:06.435 ******** 2026-01-09 01:22:22.689923 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.689929 | orchestrator | 2026-01-09 01:22:22.689935 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-01-09 01:22:22.689941 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.230) 0:00:06.666 ******** 2026-01-09 01:22:22.689946 | orchestrator | 2026-01-09 01:22:22.689952 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:22.689958 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.070) 0:00:06.736 ******** 2026-01-09 01:22:22.689963 | orchestrator | 2026-01-09 01:22:22.689969 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:22.689985 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.072) 0:00:06.808 ******** 2026-01-09 01:22:22.689992 | orchestrator | 2026-01-09 01:22:22.689998 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:22:22.690004 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.093) 0:00:06.902 ******** 2026-01-09 01:22:22.690010 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.690058 | orchestrator | 2026-01-09 01:22:22.690066 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-09 01:22:22.690073 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.273) 0:00:07.175 ******** 2026-01-09 01:22:22.690078 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.690084 | orchestrator | 2026-01-09 01:22:22.690111 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-01-09 01:22:22.690117 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.250) 0:00:07.425 ******** 2026-01-09 01:22:22.690124 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.690136 | orchestrator | 2026-01-09 01:22:22.690142 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-01-09 
01:22:22.690223 | orchestrator | Friday 09 January 2026 01:22:14 +0000 (0:00:00.117) 0:00:07.543 ******** 2026-01-09 01:22:22.690232 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:22:22.690238 | orchestrator | 2026-01-09 01:22:22.690245 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-01-09 01:22:22.690251 | orchestrator | Friday 09 January 2026 01:22:16 +0000 (0:00:02.050) 0:00:09.593 ******** 2026-01-09 01:22:22.690257 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.690264 | orchestrator | 2026-01-09 01:22:22.690270 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-01-09 01:22:22.690276 | orchestrator | Friday 09 January 2026 01:22:17 +0000 (0:00:00.495) 0:00:10.088 ******** 2026-01-09 01:22:22.690282 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.690287 | orchestrator | 2026-01-09 01:22:22.690294 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-01-09 01:22:22.690301 | orchestrator | Friday 09 January 2026 01:22:17 +0000 (0:00:00.360) 0:00:10.448 ******** 2026-01-09 01:22:22.690307 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.690313 | orchestrator | 2026-01-09 01:22:22.690319 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-01-09 01:22:22.690325 | orchestrator | Friday 09 January 2026 01:22:17 +0000 (0:00:00.150) 0:00:10.599 ******** 2026-01-09 01:22:22.690331 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:22:22.690338 | orchestrator | 2026-01-09 01:22:22.690346 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-09 01:22:22.690351 | orchestrator | Friday 09 January 2026 01:22:18 +0000 (0:00:00.154) 0:00:10.754 ******** 2026-01-09 01:22:22.690357 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.690363 | 
orchestrator | 2026-01-09 01:22:22.690369 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-09 01:22:22.690374 | orchestrator | Friday 09 January 2026 01:22:18 +0000 (0:00:00.250) 0:00:11.005 ******** 2026-01-09 01:22:22.690380 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:22:22.690386 | orchestrator | 2026-01-09 01:22:22.690391 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:22:22.690397 | orchestrator | Friday 09 January 2026 01:22:18 +0000 (0:00:00.258) 0:00:11.263 ******** 2026-01-09 01:22:22.690403 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.690409 | orchestrator | 2026-01-09 01:22:22.690426 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:22:22.690433 | orchestrator | Friday 09 January 2026 01:22:19 +0000 (0:00:01.299) 0:00:12.563 ******** 2026-01-09 01:22:22.690440 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.690447 | orchestrator | 2026-01-09 01:22:22.690463 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-09 01:22:22.690470 | orchestrator | Friday 09 January 2026 01:22:20 +0000 (0:00:00.249) 0:00:12.812 ******** 2026-01-09 01:22:22.690477 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.690486 | orchestrator | 2026-01-09 01:22:22.690492 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:22.690497 | orchestrator | Friday 09 January 2026 01:22:20 +0000 (0:00:00.259) 0:00:13.072 ******** 2026-01-09 01:22:22.690503 | orchestrator | 2026-01-09 01:22:22.690509 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:22.690516 | orchestrator | Friday 09 
January 2026 01:22:20 +0000 (0:00:00.073) 0:00:13.146 ******** 2026-01-09 01:22:22.690523 | orchestrator | 2026-01-09 01:22:22.690530 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:22.690537 | orchestrator | Friday 09 January 2026 01:22:20 +0000 (0:00:00.071) 0:00:13.218 ******** 2026-01-09 01:22:22.690543 | orchestrator | 2026-01-09 01:22:22.690550 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-09 01:22:22.690555 | orchestrator | Friday 09 January 2026 01:22:20 +0000 (0:00:00.284) 0:00:13.502 ******** 2026-01-09 01:22:22.690560 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:22.690566 | orchestrator | 2026-01-09 01:22:22.690572 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:22:22.690578 | orchestrator | Friday 09 January 2026 01:22:22 +0000 (0:00:01.401) 0:00:14.904 ******** 2026-01-09 01:22:22.690584 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-09 01:22:22.690589 | orchestrator |  "msg": [ 2026-01-09 01:22:22.690595 | orchestrator |  "Validator run completed.", 2026-01-09 01:22:22.690601 | orchestrator |  "You can find the report file here:", 2026-01-09 01:22:22.690607 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-09T01:22:08+00:00-report.json", 2026-01-09 01:22:22.690615 | orchestrator |  "on the following host:", 2026-01-09 01:22:22.690621 | orchestrator |  "testbed-manager" 2026-01-09 01:22:22.690627 | orchestrator |  ] 2026-01-09 01:22:22.690633 | orchestrator | } 2026-01-09 01:22:22.690639 | orchestrator | 2026-01-09 01:22:22.690646 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:22:22.690654 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2026-01-09 01:22:22.690662 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:22:22.690682 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 01:22:23.023430 | orchestrator | 2026-01-09 01:22:23.023510 | orchestrator | 2026-01-09 01:22:23.023516 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:22:23.023523 | orchestrator | Friday 09 January 2026 01:22:22 +0000 (0:00:00.416) 0:00:15.320 ******** 2026-01-09 01:22:23.023527 | orchestrator | =============================================================================== 2026-01-09 01:22:23.023531 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.05s 2026-01-09 01:22:23.023536 | orchestrator | Write report file ------------------------------------------------------- 1.40s 2026-01-09 01:22:23.023541 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s 2026-01-09 01:22:23.023545 | orchestrator | Create report output directory ------------------------------------------ 1.07s 2026-01-09 01:22:23.023549 | orchestrator | Get container info ------------------------------------------------------ 1.07s 2026-01-09 01:22:23.023553 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2026-01-09 01:22:23.023556 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.53s 2026-01-09 01:22:23.023580 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2026-01-09 01:22:23.023584 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.50s 2026-01-09 01:22:23.023588 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s 2026-01-09 01:22:23.023592 | orchestrator | 
Print report file information ------------------------------------------- 0.42s 2026-01-09 01:22:23.023661 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.36s 2026-01-09 01:22:23.023665 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-01-09 01:22:23.023669 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-01-09 01:22:23.023698 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-01-09 01:22:23.023703 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-01-09 01:22:23.023708 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-01-09 01:22:23.023712 | orchestrator | Print report file information ------------------------------------------- 0.27s 2026-01-09 01:22:23.023716 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-01-09 01:22:23.023719 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2026-01-09 01:22:23.363418 | orchestrator | + osism validate ceph-osds 2026-01-09 01:22:45.255968 | orchestrator | 2026-01-09 01:22:45.256058 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-01-09 01:22:45.256065 | orchestrator | 2026-01-09 01:22:45.256070 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-09 01:22:45.256076 | orchestrator | Friday 09 January 2026 01:22:40 +0000 (0:00:00.489) 0:00:00.489 ******** 2026-01-09 01:22:45.256080 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:45.256085 | orchestrator | 2026-01-09 01:22:45.256089 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-09 
01:22:45.256093 | orchestrator | Friday 09 January 2026 01:22:41 +0000 (0:00:00.858) 0:00:01.348 ******** 2026-01-09 01:22:45.256097 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:45.256100 | orchestrator | 2026-01-09 01:22:45.256104 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-09 01:22:45.256108 | orchestrator | Friday 09 January 2026 01:22:41 +0000 (0:00:00.525) 0:00:01.874 ******** 2026-01-09 01:22:45.256111 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:22:45.256115 | orchestrator | 2026-01-09 01:22:45.256119 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-09 01:22:45.256122 | orchestrator | Friday 09 January 2026 01:22:42 +0000 (0:00:00.767) 0:00:02.641 ******** 2026-01-09 01:22:45.256126 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:45.256131 | orchestrator | 2026-01-09 01:22:45.256135 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-01-09 01:22:45.256180 | orchestrator | Friday 09 January 2026 01:22:42 +0000 (0:00:00.134) 0:00:02.775 ******** 2026-01-09 01:22:45.256184 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:45.256188 | orchestrator | 2026-01-09 01:22:45.256192 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-09 01:22:45.256196 | orchestrator | Friday 09 January 2026 01:22:43 +0000 (0:00:00.161) 0:00:02.937 ******** 2026-01-09 01:22:45.256199 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:45.256203 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:45.256207 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:45.256210 | orchestrator | 2026-01-09 01:22:45.256214 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-01-09 01:22:45.256218 | 
orchestrator | Friday 09 January 2026 01:22:43 +0000 (0:00:00.336) 0:00:03.274 ******** 2026-01-09 01:22:45.256222 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:45.256245 | orchestrator | 2026-01-09 01:22:45.256250 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-09 01:22:45.256254 | orchestrator | Friday 09 January 2026 01:22:43 +0000 (0:00:00.158) 0:00:03.432 ******** 2026-01-09 01:22:45.256258 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:45.256261 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:45.256265 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:45.256269 | orchestrator | 2026-01-09 01:22:45.256273 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-01-09 01:22:45.256277 | orchestrator | Friday 09 January 2026 01:22:43 +0000 (0:00:00.327) 0:00:03.760 ******** 2026-01-09 01:22:45.256280 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:45.256284 | orchestrator | 2026-01-09 01:22:45.256288 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:22:45.256291 | orchestrator | Friday 09 January 2026 01:22:44 +0000 (0:00:00.623) 0:00:04.383 ******** 2026-01-09 01:22:45.256295 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:45.256299 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:45.256303 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:45.256307 | orchestrator | 2026-01-09 01:22:45.256310 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-01-09 01:22:45.256314 | orchestrator | Friday 09 January 2026 01:22:44 +0000 (0:00:00.503) 0:00:04.887 ******** 2026-01-09 01:22:45.256320 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ed81fe85f62b5bacb65db5d799b4036507b39b019bb7a240af804e47c1e561b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 
'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.256326 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd2e92e820469aa1eb77cc02baa954f882da5d8138ecb7945e8e57f9860e7d21d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.256362 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f4e194db64d8272d9c3c973c6b8bb9e72e8f6198861d8a5ecebeeb264c892399', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-09 01:22:45.256368 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea12d32bf0dbf7a7106661b08758863bb05433d2c7d793f084901e2d2e489302', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-09 01:22:45.256389 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fc37cdbc546c16a9b55fbd8032889058b52fcc4b824ae3033b48c31d1fcbaa5f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-09 01:22:45.256405 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b19c09f800075c782c0603f22cda16788633d6bf8f5a38bdbcf4754b004e6e12', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-09 01:22:45.256410 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd6b843c06085925231c21b4ac6573cef709f7eace26deb26d6d1e51415d78a4a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-01-09 01:22:45.256413 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'1a4878b26810773b97042f05b7f8f9ecefa8f9593ff064a79a0f81ebc804ed63', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-09 01:22:45.256417 | orchestrator | skipping: [testbed-node-3] => (item={'id': '626dc866e21438260c7079a30ce4b6aef34d68eb1f950fc5a7721764fc369dfc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-09 01:22:45.256427 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cae5dd3e0568a199b16a4e820b8426c7419207a80df4c055d5a92d61ad1fff75', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2026-01-09 01:22:45.256450 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd54eb61eff79495f1bb8f2c852eaffd0189bbff9d970f83d798d98da893d9a4b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.256455 | orchestrator | ok: [testbed-node-3] => (item={'id': '4128d7f209d1f85110e1fc1e54425a6d3eed8329f37ab5405a3b4e4cff8e5df1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.256459 | orchestrator | skipping: [testbed-node-3] => (item={'id': '693d2ef4eccc108679a5476530edfa892498349b4fec0dd10b6da18b1ad2b494', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-09 01:22:45.256463 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a0b6d67a11655d99fe824dce87c37de0bc84064654b123d6b530a0cfcfa5b5bc', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-01-09 01:22:45.256469 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '6fb2f4abee88f8da43a1e19392635fdbb0118cfc7e7909ea9c64225c5a17f42e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-01-09 01:22:45.256473 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c3ba12e68a1f1166e0f3098b2bf7f4a3af5ac2f10c94196bea09f2c2deb3f1bb', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-09 01:22:45.256477 | orchestrator | skipping: [testbed-node-3] => (item={'id': '12b0c953e47b83efd175572bcdd094851bddbc805fea3709c493750651fd326f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:45.256481 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd77aa42c2bec2ef17bf1c3344e3fe8b97bfe928adcdcb6cdb72b2f9e380bdca4', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:45.256485 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7d9316ca4356bdb9d8ae979cb5ae0e8b9f3f0f97415cde54012c3c5215f0b80a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.256494 | orchestrator | skipping: [testbed-node-4] => (item={'id': '99c36d98f3092ad7b1a3f1787793074a9c284b444d069c0f8b910f3f79323ebf', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.256503 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f71431636ad8fa0f15c129b2fc4980559a68e5812280dcf5e0e3f9348b8bd4b7', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  
2026-01-09 01:22:45.256512 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35c4d3e47ac2ec82ec75fe875116c46569881e4e9285f0bb9ee2ee8ff5a7199f', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-09 01:22:45.506364 | orchestrator | skipping: [testbed-node-4] => (item={'id': '28c558881effa59e80d963ef8a3cd2c697bf22f921f0265b01d2952f89344cfe', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-09 01:22:45.506472 | orchestrator | skipping: [testbed-node-4] => (item={'id': '87020c0e4e59ae2d06a931e212098ca7c8035a47de1a0edeb3ad5cfbdb1b1f62', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-09 01:22:45.506484 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46f8786435123bcc86af517b14d4ed8df846e13b82d8cfdcebe6ade3880f9cfa', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-01-09 01:22:45.506491 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54d8a440dd2b7ed7a2469c96daaf3ee9f700da067993d80b34e83e9ea45512c2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-09 01:22:45.506497 | orchestrator | skipping: [testbed-node-4] => (item={'id': '92e65fbc3bc66d4d786f73f06687db7fce84172f48cf6511f52fdaba79ae5001', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-09 01:22:45.506505 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'0897a09af1f5df7f88faa9f9548e59ba0cba38226e44ca0c9d85509a89f88818', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2026-01-09 01:22:45.506513 | orchestrator | ok: [testbed-node-4] => (item={'id': '01ffdf6849715abb59f98d2d28365fed6a41ed126af57ab490c38f46a2002f03', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.506522 | orchestrator | ok: [testbed-node-4] => (item={'id': '63052f8f7237bac19dd5ea164066ab3d75842528dea97b1fe4e03bc04815355c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.506528 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef1b57b557a85b6a700f26fb00d89254b06c4a82e565acc7a3dae95519600d81', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-09 01:22:45.506536 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'febcbf6e11d4cbb3d0c18e8e50b36c3c4bc9817c3f284414de654b0ed204c94b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-01-09 01:22:45.506542 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0cdd0734908157e17aa4693951ae735d8a664d515f532ce02fd582141407bf85', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-01-09 01:22:45.506547 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1bc877a8e4ec1f3abcfea911caea02009b4f9acf6393a8b60b5da3437decad4b', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-09 01:22:45.506551 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'c120b163eaff40d41d0e95ad381a99903e0a610fc163f19b0d35fa214c340ea9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:45.506566 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc3d5fdf8e0625b2dc52d6f7c632cf4a1f1f45e56eb28f4663630576d727b911', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:45.506570 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e87f75bba293964f334e09f05db7b659c95de5c0fefeec739ac24a589edc3a8', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.506588 | orchestrator | skipping: [testbed-node-5] => (item={'id': '727d51911a5e62a152010898e7243794f80426e3fa4899919c83d161318f5077', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-09 01:22:45.506629 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c4e4ea258c98f1f1036563856bf1380f8bc1fe8598552cb70817c182287ff7c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-09 01:22:45.506634 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a426aa1483eae96e568b0b8cc2ff25a2e50afa454d53f6e42935ae7f1cde1c7', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-09 01:22:45.506638 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55190a20cfd80333a551043e8725b4206ea1a68db6cdfec79fb26b28e7c64df2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 
minutes'})  2026-01-09 01:22:45.506641 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0a27aa982d29cf5f444b10b5a3e33523a868ea9c17d6ee168da60fe3e125f7cd', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-09 01:22:45.506645 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0ed3bdcd5c4e78fdf5b8bc56e9f7245250b663756cd4bc1bcb333ea3b42247d2', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-01-09 01:22:45.506649 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2068d5e9eccde9e149b0a21bbc8dfa39e2f60db7450fb8af345cfd1d4ea66c55', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-09 01:22:45.506653 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2061fdd322f1e0dbb09a9b48e5a00cac4bd3b9149f1e49b2fd3c08e2be02184', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-01-09 01:22:45.506657 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e55ade473d311a0f7bfbeca53fd3ddb6f6273bdf4b9ac7d550cafd562c8964d5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2026-01-09 01:22:45.506660 | orchestrator | ok: [testbed-node-5] => (item={'id': '599811e405187f3d5c863ec215841e16c85397a562bd0d60e884fdcca2d834d0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.506664 | orchestrator | ok: [testbed-node-5] => (item={'id': '4681f56a9ca266ad5f329ab5de98d798db4be7de1aa9a6aeb1e28f26c847f0e7', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'}) 2026-01-09 01:22:45.506668 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f00922f12e4d55e129c85e1e6624b223b7919ac603a02676bcd40ea22da93fd6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-09 01:22:45.506672 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03049609ef9c034f377ec1c4d72c8f191cfa3d89bda2f11c161fb69a8ab993f2', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-01-09 01:22:45.506679 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df9c57b21704398759b0ca35c5d88947176755f5d3fb31e746adb423cfa60927', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-01-09 01:22:45.506686 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6818082000af6ddac09adbe999e43503342f912b3988b842905c6f1cb9345ae2', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-09 01:22:45.506690 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cae2738bc39e9ed1449bda55a774e5030ee8816ccca62be39363d996f6b948e1', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:45.506698 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25da3b78e064352f7800d4689aa8c31661b6f6f5f54fcd1749c713484b2779e6', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2026-01-09 01:22:59.718729 | orchestrator | 2026-01-09 01:22:59.718840 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-01-09 01:22:59.718857 | orchestrator | Friday 09 January 2026 01:22:45 +0000 (0:00:00.503) 0:00:05.390 ******** 2026-01-09 01:22:59.718871 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.718884 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.718894 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.718904 | orchestrator | 2026-01-09 01:22:59.718914 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-01-09 01:22:59.718924 | orchestrator | Friday 09 January 2026 01:22:45 +0000 (0:00:00.321) 0:00:05.711 ******** 2026-01-09 01:22:59.718934 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.718945 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.718962 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.718978 | orchestrator | 2026-01-09 01:22:59.718995 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-01-09 01:22:59.719012 | orchestrator | Friday 09 January 2026 01:22:46 +0000 (0:00:00.544) 0:00:06.256 ******** 2026-01-09 01:22:59.719027 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.719042 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.719057 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.719070 | orchestrator | 2026-01-09 01:22:59.719086 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:22:59.719115 | orchestrator | Friday 09 January 2026 01:22:46 +0000 (0:00:00.317) 0:00:06.573 ******** 2026-01-09 01:22:59.719176 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.719206 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.719239 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.719271 | orchestrator | 2026-01-09 01:22:59.719303 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-01-09 
01:22:59.719339 | orchestrator | Friday 09 January 2026 01:22:46 +0000 (0:00:00.294) 0:00:06.868 ******** 2026-01-09 01:22:59.719373 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-01-09 01:22:59.719407 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-01-09 01:22:59.719441 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.719470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-01-09 01:22:59.719495 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-01-09 01:22:59.719519 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.719543 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-01-09 01:22:59.719568 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-01-09 01:22:59.719592 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.719615 | orchestrator | 2026-01-09 01:22:59.719640 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-01-09 01:22:59.719663 | orchestrator | Friday 09 January 2026 01:22:47 +0000 (0:00:00.322) 0:00:07.190 ******** 2026-01-09 01:22:59.719724 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.719750 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.719773 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.719795 | orchestrator | 2026-01-09 01:22:59.719817 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-09 01:22:59.719839 | orchestrator | Friday 09 January 2026 01:22:47 +0000 (0:00:00.542) 0:00:07.733 ******** 2026-01-09 01:22:59.719861 | orchestrator | skipping: [testbed-node-3] 
2026-01-09 01:22:59.719914 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.719933 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.719949 | orchestrator | 2026-01-09 01:22:59.719964 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-09 01:22:59.719978 | orchestrator | Friday 09 January 2026 01:22:48 +0000 (0:00:00.308) 0:00:08.042 ******** 2026-01-09 01:22:59.719994 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720009 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.720024 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.720038 | orchestrator | 2026-01-09 01:22:59.720053 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-01-09 01:22:59.720069 | orchestrator | Friday 09 January 2026 01:22:48 +0000 (0:00:00.327) 0:00:08.369 ******** 2026-01-09 01:22:59.720084 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.720101 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.720117 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.720263 | orchestrator | 2026-01-09 01:22:59.720277 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:22:59.720287 | orchestrator | Friday 09 January 2026 01:22:48 +0000 (0:00:00.317) 0:00:08.686 ******** 2026-01-09 01:22:59.720297 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720307 | orchestrator | 2026-01-09 01:22:59.720317 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:22:59.720327 | orchestrator | Friday 09 January 2026 01:22:49 +0000 (0:00:00.719) 0:00:09.406 ******** 2026-01-09 01:22:59.720336 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720345 | orchestrator | 2026-01-09 01:22:59.720355 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-01-09 01:22:59.720364 | orchestrator | Friday 09 January 2026 01:22:49 +0000 (0:00:00.240) 0:00:09.646 ******** 2026-01-09 01:22:59.720374 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720383 | orchestrator | 2026-01-09 01:22:59.720393 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:59.720403 | orchestrator | Friday 09 January 2026 01:22:49 +0000 (0:00:00.240) 0:00:09.887 ******** 2026-01-09 01:22:59.720412 | orchestrator | 2026-01-09 01:22:59.720421 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:59.720431 | orchestrator | Friday 09 January 2026 01:22:50 +0000 (0:00:00.073) 0:00:09.960 ******** 2026-01-09 01:22:59.720441 | orchestrator | 2026-01-09 01:22:59.720451 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:22:59.720488 | orchestrator | Friday 09 January 2026 01:22:50 +0000 (0:00:00.070) 0:00:10.031 ******** 2026-01-09 01:22:59.720498 | orchestrator | 2026-01-09 01:22:59.720508 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:22:59.720518 | orchestrator | Friday 09 January 2026 01:22:50 +0000 (0:00:00.071) 0:00:10.103 ******** 2026-01-09 01:22:59.720527 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720537 | orchestrator | 2026-01-09 01:22:59.720546 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-01-09 01:22:59.720556 | orchestrator | Friday 09 January 2026 01:22:50 +0000 (0:00:00.269) 0:00:10.372 ******** 2026-01-09 01:22:59.720565 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.720575 | orchestrator | 2026-01-09 01:22:59.720591 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:22:59.720615 | 
orchestrator | Friday 09 January 2026 01:22:50 +0000 (0:00:00.262) 0:00:10.635 ******** 2026-01-09 01:22:59.720655 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.720672 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.720688 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.720704 | orchestrator | 2026-01-09 01:22:59.720772 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-01-09 01:22:59.720791 | orchestrator | Friday 09 January 2026 01:22:51 +0000 (0:00:00.300) 0:00:10.935 ******** 2026-01-09 01:22:59.720807 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.720823 | orchestrator | 2026-01-09 01:22:59.720839 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-01-09 01:22:59.720856 | orchestrator | Friday 09 January 2026 01:22:51 +0000 (0:00:00.234) 0:00:11.170 ******** 2026-01-09 01:22:59.720873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-09 01:22:59.720888 | orchestrator | 2026-01-09 01:22:59.720905 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-01-09 01:22:59.720916 | orchestrator | Friday 09 January 2026 01:22:53 +0000 (0:00:02.136) 0:00:13.307 ******** 2026-01-09 01:22:59.720926 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.720935 | orchestrator | 2026-01-09 01:22:59.720945 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-01-09 01:22:59.720954 | orchestrator | Friday 09 January 2026 01:22:53 +0000 (0:00:00.144) 0:00:13.452 ******** 2026-01-09 01:22:59.720964 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.720974 | orchestrator | 2026-01-09 01:22:59.720984 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-01-09 01:22:59.720993 | orchestrator | Friday 09 January 2026 01:22:53 +0000 (0:00:00.334) 
0:00:13.786 ******** 2026-01-09 01:22:59.721003 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.721012 | orchestrator | 2026-01-09 01:22:59.721022 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-01-09 01:22:59.721031 | orchestrator | Friday 09 January 2026 01:22:54 +0000 (0:00:00.142) 0:00:13.929 ******** 2026-01-09 01:22:59.721041 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.721054 | orchestrator | 2026-01-09 01:22:59.721070 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:22:59.721089 | orchestrator | Friday 09 January 2026 01:22:54 +0000 (0:00:00.145) 0:00:14.075 ******** 2026-01-09 01:22:59.721111 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.721152 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.721169 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.721185 | orchestrator | 2026-01-09 01:22:59.721200 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-01-09 01:22:59.721215 | orchestrator | Friday 09 January 2026 01:22:54 +0000 (0:00:00.313) 0:00:14.389 ******** 2026-01-09 01:22:59.721231 | orchestrator | changed: [testbed-node-3] 2026-01-09 01:22:59.721246 | orchestrator | changed: [testbed-node-4] 2026-01-09 01:22:59.721262 | orchestrator | changed: [testbed-node-5] 2026-01-09 01:22:59.721276 | orchestrator | 2026-01-09 01:22:59.721292 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-01-09 01:22:59.721307 | orchestrator | Friday 09 January 2026 01:22:57 +0000 (0:00:02.678) 0:00:17.067 ******** 2026-01-09 01:22:59.721323 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.721338 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.721353 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.721368 | orchestrator | 2026-01-09 01:22:59.721385 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-01-09 01:22:59.721402 | orchestrator | Friday 09 January 2026 01:22:57 +0000 (0:00:00.529) 0:00:17.597 ******** 2026-01-09 01:22:59.721419 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.721436 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.721452 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.721468 | orchestrator | 2026-01-09 01:22:59.721491 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-01-09 01:22:59.721511 | orchestrator | Friday 09 January 2026 01:22:58 +0000 (0:00:00.513) 0:00:18.111 ******** 2026-01-09 01:22:59.721542 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.721559 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.721574 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.721591 | orchestrator | 2026-01-09 01:22:59.721609 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-01-09 01:22:59.721618 | orchestrator | Friday 09 January 2026 01:22:58 +0000 (0:00:00.317) 0:00:18.428 ******** 2026-01-09 01:22:59.721628 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:22:59.721637 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:22:59.721647 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:22:59.721656 | orchestrator | 2026-01-09 01:22:59.721666 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-01-09 01:22:59.721676 | orchestrator | Friday 09 January 2026 01:22:59 +0000 (0:00:00.515) 0:00:18.944 ******** 2026-01-09 01:22:59.721685 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.721695 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.721704 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.721714 | orchestrator | 2026-01-09 01:22:59.721723 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-01-09 01:22:59.721733 | orchestrator | Friday 09 January 2026 01:22:59 +0000 (0:00:00.328) 0:00:19.273 ******** 2026-01-09 01:22:59.721742 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:22:59.721752 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:22:59.721761 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:22:59.721771 | orchestrator | 2026-01-09 01:22:59.721794 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-09 01:23:08.185229 | orchestrator | Friday 09 January 2026 01:22:59 +0000 (0:00:00.342) 0:00:19.615 ******** 2026-01-09 01:23:08.185346 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:23:08.185357 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:23:08.185364 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:23:08.185372 | orchestrator | 2026-01-09 01:23:08.185379 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-01-09 01:23:08.185387 | orchestrator | Friday 09 January 2026 01:23:00 +0000 (0:00:00.540) 0:00:20.156 ******** 2026-01-09 01:23:08.185393 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:23:08.185400 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:23:08.185407 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:23:08.185413 | orchestrator | 2026-01-09 01:23:08.185421 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-01-09 01:23:08.185428 | orchestrator | Friday 09 January 2026 01:23:01 +0000 (0:00:00.965) 0:00:21.121 ******** 2026-01-09 01:23:08.185434 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:23:08.185441 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:23:08.185448 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:23:08.185454 | orchestrator | 2026-01-09 01:23:08.185461 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-01-09 
01:23:08.185467 | orchestrator | Friday 09 January 2026 01:23:01 +0000 (0:00:00.316) 0:00:21.437 ******** 2026-01-09 01:23:08.185474 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:23:08.185481 | orchestrator | skipping: [testbed-node-4] 2026-01-09 01:23:08.185488 | orchestrator | skipping: [testbed-node-5] 2026-01-09 01:23:08.185495 | orchestrator | 2026-01-09 01:23:08.185501 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-01-09 01:23:08.185508 | orchestrator | Friday 09 January 2026 01:23:01 +0000 (0:00:00.311) 0:00:21.749 ******** 2026-01-09 01:23:08.185514 | orchestrator | ok: [testbed-node-3] 2026-01-09 01:23:08.185520 | orchestrator | ok: [testbed-node-4] 2026-01-09 01:23:08.185527 | orchestrator | ok: [testbed-node-5] 2026-01-09 01:23:08.185533 | orchestrator | 2026-01-09 01:23:08.185538 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-09 01:23:08.185545 | orchestrator | Friday 09 January 2026 01:23:02 +0000 (0:00:00.321) 0:00:22.070 ******** 2026-01-09 01:23:08.185550 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:23:08.185556 | orchestrator | 2026-01-09 01:23:08.185585 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-09 01:23:08.185591 | orchestrator | Friday 09 January 2026 01:23:02 +0000 (0:00:00.512) 0:00:22.583 ******** 2026-01-09 01:23:08.185598 | orchestrator | skipping: [testbed-node-3] 2026-01-09 01:23:08.185604 | orchestrator | 2026-01-09 01:23:08.185610 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-09 01:23:08.185617 | orchestrator | Friday 09 January 2026 01:23:03 +0000 (0:00:00.760) 0:00:23.344 ******** 2026-01-09 01:23:08.185624 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:23:08.185630 | orchestrator | 2026-01-09 01:23:08.185637 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-09 01:23:08.185643 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:01.669) 0:00:25.013 ******** 2026-01-09 01:23:08.185650 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:23:08.185656 | orchestrator | 2026-01-09 01:23:08.185662 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-09 01:23:08.185669 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:00.287) 0:00:25.300 ******** 2026-01-09 01:23:08.185675 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:23:08.185682 | orchestrator | 2026-01-09 01:23:08.185688 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:23:08.185695 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:00.280) 0:00:25.581 ******** 2026-01-09 01:23:08.185701 | orchestrator | 2026-01-09 01:23:08.185708 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:23:08.185714 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:00.071) 0:00:25.653 ******** 2026-01-09 01:23:08.185721 | orchestrator | 2026-01-09 01:23:08.185728 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-09 01:23:08.185734 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:00.070) 0:00:25.724 ******** 2026-01-09 01:23:08.185741 | orchestrator | 2026-01-09 01:23:08.185748 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-09 01:23:08.185755 | orchestrator | Friday 09 January 2026 01:23:05 +0000 (0:00:00.071) 0:00:25.796 ******** 2026-01-09 01:23:08.185762 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-09 01:23:08.185768 | orchestrator | 
2026-01-09 01:23:08.185775 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-09 01:23:08.185782 | orchestrator | Friday 09 January 2026 01:23:07 +0000 (0:00:01.287) 0:00:27.084 ******** 2026-01-09 01:23:08.185802 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-01-09 01:23:08.185809 | orchestrator |  "msg": [ 2026-01-09 01:23:08.185817 | orchestrator |  "Validator run completed.", 2026-01-09 01:23:08.185824 | orchestrator |  "You can find the report file here:", 2026-01-09 01:23:08.185831 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-09T01:22:41+00:00-report.json", 2026-01-09 01:23:08.185838 | orchestrator |  "on the following host:", 2026-01-09 01:23:08.185846 | orchestrator |  "testbed-manager" 2026-01-09 01:23:08.185853 | orchestrator |  ] 2026-01-09 01:23:08.185860 | orchestrator | } 2026-01-09 01:23:08.185867 | orchestrator | 2026-01-09 01:23:08.185874 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:23:08.185883 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-09 01:23:08.185891 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-09 01:23:08.185915 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-09 01:23:08.185922 | orchestrator | 2026-01-09 01:23:08.185936 | orchestrator | 2026-01-09 01:23:08.185943 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:23:08.185950 | orchestrator | Friday 09 January 2026 01:23:07 +0000 (0:00:00.455) 0:00:27.539 ******** 2026-01-09 01:23:08.185956 | orchestrator | =============================================================================== 2026-01-09 01:23:08.185963 | orchestrator | List ceph LVM volumes 
and collect data ---------------------------------- 2.68s 2026-01-09 01:23:08.185969 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.14s 2026-01-09 01:23:08.185976 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2026-01-09 01:23:08.185982 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-01-09 01:23:08.185989 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.97s 2026-01-09 01:23:08.185995 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-01-09 01:23:08.186002 | orchestrator | Create report output directory ------------------------------------------ 0.77s 2026-01-09 01:23:08.186008 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.76s 2026-01-09 01:23:08.186069 | orchestrator | Aggregate test results step one ----------------------------------------- 0.72s 2026-01-09 01:23:08.186078 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.62s 2026-01-09 01:23:08.186085 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.54s 2026-01-09 01:23:08.186091 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.54s 2026-01-09 01:23:08.186098 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-01-09 01:23:08.186103 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.53s 2026-01-09 01:23:08.186110 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s 2026-01-09 01:23:08.186116 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s 2026-01-09 01:23:08.186139 | orchestrator | Get unencrypted and encrypted OSDs 
-------------------------------------- 0.51s 2026-01-09 01:23:08.186145 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.51s 2026-01-09 01:23:08.186151 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-01-09 01:23:08.186158 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2026-01-09 01:23:08.523862 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-09 01:23:08.532933 | orchestrator | + set -e 2026-01-09 01:23:08.533038 | orchestrator | + source /opt/manager-vars.sh 2026-01-09 01:23:08.533053 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-09 01:23:08.533068 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-09 01:23:08.533100 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-09 01:23:08.533117 | orchestrator | ++ CEPH_VERSION=reef 2026-01-09 01:23:08.533181 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-09 01:23:08.533193 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-09 01:23:08.533203 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 01:23:08.533218 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 01:23:08.533231 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-09 01:23:08.533241 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-09 01:23:08.533250 | orchestrator | ++ export ARA=false 2026-01-09 01:23:08.533261 | orchestrator | ++ ARA=false 2026-01-09 01:23:08.533273 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-09 01:23:08.533283 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-09 01:23:08.533294 | orchestrator | ++ export TEMPEST=true 2026-01-09 01:23:08.533305 | orchestrator | ++ TEMPEST=true 2026-01-09 01:23:08.533317 | orchestrator | ++ export IS_ZUUL=true 2026-01-09 01:23:08.533328 | orchestrator | ++ IS_ZUUL=true 2026-01-09 01:23:08.533340 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 
01:23:08.533352 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.67 2026-01-09 01:23:08.533362 | orchestrator | ++ export EXTERNAL_API=false 2026-01-09 01:23:08.533374 | orchestrator | ++ EXTERNAL_API=false 2026-01-09 01:23:08.533385 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-09 01:23:08.533394 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-09 01:23:08.533405 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-09 01:23:08.533444 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-09 01:23:08.533456 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-09 01:23:08.533465 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-09 01:23:08.533474 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-09 01:23:08.533484 | orchestrator | + source /etc/os-release 2026-01-09 01:23:08.533494 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-09 01:23:08.533520 | orchestrator | ++ NAME=Ubuntu 2026-01-09 01:23:08.533528 | orchestrator | ++ VERSION_ID=24.04 2026-01-09 01:23:08.533536 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-09 01:23:08.533543 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-09 01:23:08.533551 | orchestrator | ++ ID=ubuntu 2026-01-09 01:23:08.533558 | orchestrator | ++ ID_LIKE=debian 2026-01-09 01:23:08.533565 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-09 01:23:08.533571 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-09 01:23:08.533579 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-09 01:23:08.533598 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-09 01:23:08.533606 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-09 01:23:08.533613 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-09 01:23:08.533620 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-09 01:23:08.533628 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic 
mysql-client' 2026-01-09 01:23:08.533636 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-09 01:23:08.564161 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-09 01:23:32.346502 | orchestrator | 2026-01-09 01:23:32.346587 | orchestrator | # Status of Elasticsearch 2026-01-09 01:23:32.346595 | orchestrator | 2026-01-09 01:23:32.346600 | orchestrator | + pushd /opt/configuration/contrib 2026-01-09 01:23:32.346607 | orchestrator | + echo 2026-01-09 01:23:32.346612 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-09 01:23:32.346616 | orchestrator | + echo 2026-01-09 01:23:32.346621 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-09 01:23:32.538009 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-09 01:23:32.538255 | orchestrator | 2026-01-09 01:23:32.538267 | orchestrator | # Status of MariaDB 2026-01-09 01:23:32.538275 | orchestrator | 2026-01-09 01:23:32.538282 | orchestrator | + echo 2026-01-09 01:23:32.538290 | orchestrator | + echo '# Status of MariaDB' 2026-01-09 01:23:32.538296 | orchestrator | + echo 2026-01-09 01:23:32.539595 | orchestrator | ++ semver latest 10.0.0-0 2026-01-09 01:23:32.615486 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 01:23:32.615569 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 01:23:32.615577 | orchestrator | + osism status database 2026-01-09 01:23:34.646052 | orchestrator | 2026-01-09 01:23:34 | ERROR  | Unable to get ansible vault password 2026-01-09 01:23:34.646227 | 
orchestrator | 2026-01-09 01:23:34 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-09 01:23:34.646237 | orchestrator | 2026-01-09 01:23:34 | ERROR  | Dropping encrypted entries 2026-01-09 01:23:34.678661 | orchestrator | 2026-01-09 01:23:34 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-01-09 01:23:34.687767 | orchestrator | 2026-01-09 01:23:34 | INFO  | Cluster Status: Primary 2026-01-09 01:23:34.687864 | orchestrator | 2026-01-09 01:23:34 | INFO  | Connected: ON 2026-01-09 01:23:34.687873 | orchestrator | 2026-01-09 01:23:34 | INFO  | Ready: ON 2026-01-09 01:23:34.687880 | orchestrator | 2026-01-09 01:23:34 | INFO  | Cluster Size: 3 2026-01-09 01:23:34.687886 | orchestrator | 2026-01-09 01:23:34 | INFO  | Local State: Synced 2026-01-09 01:23:34.687893 | orchestrator | 2026-01-09 01:23:34 | INFO  | Cluster State UUID: 0cc38ad4-ecf6-11f0-a1d7-8aa252b1d0be 2026-01-09 01:23:34.687930 | orchestrator | 2026-01-09 01:23:34 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-01-09 01:23:34.687938 | orchestrator | 2026-01-09 01:23:34 | INFO  | Galera Version: 26.4.24(ra6b53429) 2026-01-09 01:23:34.688196 | orchestrator | 2026-01-09 01:23:34 | INFO  | Local Node UUID: 42725a7b-ecf6-11f0-9af1-5bd01b616268 2026-01-09 01:23:34.688214 | orchestrator | 2026-01-09 01:23:34 | INFO  | Flow Control Paused: 0.00% 2026-01-09 01:23:34.688221 | orchestrator | 2026-01-09 01:23:34 | INFO  | Recv Queue Avg: 0 2026-01-09 01:23:34.688227 | orchestrator | 2026-01-09 01:23:34 | INFO  | Send Queue Avg: 0.000734304 2026-01-09 01:23:34.688234 | orchestrator | 2026-01-09 01:23:34 | INFO  | Transactions: 5387 local commits, 8102 replicated, 105 received 2026-01-09 01:23:34.688490 | orchestrator | 2026-01-09 01:23:34 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-01-09 01:23:34.688509 | orchestrator | 2026-01-09 01:23:34 | INFO  | MariaDB Uptime: 25 minutes, 
28 seconds 2026-01-09 01:23:34.688652 | orchestrator | 2026-01-09 01:23:34 | INFO  | Threads: 125 connected, 1 running 2026-01-09 01:23:34.688663 | orchestrator | 2026-01-09 01:23:34 | INFO  | Queries: 149072 total, 0 slow 2026-01-09 01:23:34.688671 | orchestrator | 2026-01-09 01:23:34 | INFO  | Aborted Connects: 51 2026-01-09 01:23:34.689173 | orchestrator | 2026-01-09 01:23:34 | INFO  | MariaDB Galera Cluster validation PASSED 2026-01-09 01:23:34.996903 | orchestrator | 2026-01-09 01:23:34.997029 | orchestrator | # Status of Prometheus 2026-01-09 01:23:34.997048 | orchestrator | 2026-01-09 01:23:34.997060 | orchestrator | + echo 2026-01-09 01:23:34.997072 | orchestrator | + echo '# Status of Prometheus' 2026-01-09 01:23:34.997084 | orchestrator | + echo 2026-01-09 01:23:34.997096 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-09 01:23:35.049194 | orchestrator | Unauthorized 2026-01-09 01:23:35.052690 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-09 01:23:35.123914 | orchestrator | Unauthorized 2026-01-09 01:23:35.126921 | orchestrator | 2026-01-09 01:23:35.127057 | orchestrator | # Status of RabbitMQ 2026-01-09 01:23:35.127075 | orchestrator | 2026-01-09 01:23:35.127085 | orchestrator | + echo 2026-01-09 01:23:35.127096 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-09 01:23:35.127127 | orchestrator | + echo 2026-01-09 01:23:35.128001 | orchestrator | ++ semver latest 10.0.0-0 2026-01-09 01:23:35.186629 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-09 01:23:35.186738 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 01:23:35.186751 | orchestrator | + osism status messaging 2026-01-09 01:23:57.764073 | orchestrator | 2026-01-09 01:23:57 | ERROR  | Unable to get ansible vault password 2026-01-09 01:23:57.765047 | orchestrator | 2026-01-09 01:23:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-09 
01:23:57.765119 | orchestrator | 2026-01-09 01:23:57 | ERROR  | Dropping encrypted entries 2026-01-09 01:23:57.798201 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-01-09 01:23:57.855941 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-01-09 01:23:57.856028 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-01-09 01:23:57.856061 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-01-09 01:23:57.856075 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Cluster Size: 3 2026-01-09 01:23:57.856692 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:57.856727 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:57.856754 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-01-09 01:23:57.856932 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Connections: 202, Channels: 201, Queues: 173 2026-01-09 01:23:57.857137 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked 2026-01-09 01:23:57.857780 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Message Rates: 6.2/s publish, 6.8/s deliver 2026-01-09 01:23:57.857832 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-01-09 01:23:57.858148 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-09 01:23:57.858732 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] File Descriptors: 116/1024 2026-01-09 01:23:57.859017 | 
orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-0] Sockets: 68/832 2026-01-09 01:23:57.859039 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-01-09 01:23:57.953614 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-01-09 01:23:57.953741 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-01-09 01:23:57.953749 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-01-09 01:23:57.953757 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Cluster Size: 3 2026-01-09 01:23:57.953764 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:57.953771 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:57.953777 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-01-09 01:23:57.953783 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Connections: 202, Channels: 201, Queues: 173 2026-01-09 01:23:57.953790 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked 2026-01-09 01:23:57.953795 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Message Rates: 6.2/s publish, 6.8/s deliver 2026-01-09 01:23:57.953801 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-09 01:23:57.953815 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-01-09 01:23:57.953821 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-1] File Descriptors: 110/1024 2026-01-09 01:23:57.953827 | orchestrator | 
2026-01-09 01:23:57 | INFO  | [testbed-node-1] Sockets: 64/832 2026-01-09 01:23:57.954002 | orchestrator | 2026-01-09 01:23:57 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-01-09 01:23:58.025599 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-01-09 01:23:58.025684 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-01-09 01:23:58.025694 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-01-09 01:23:58.025701 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Cluster Size: 3 2026-01-09 01:23:58.025725 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:58.025751 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-09 01:23:58.025758 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-01-09 01:23:58.025764 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Connections: 202, Channels: 201, Queues: 173 2026-01-09 01:23:58.025771 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked 2026-01-09 01:23:58.025777 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Message Rates: 6.2/s publish, 6.8/s deliver 2026-01-09 01:23:58.025783 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Disk Free: 58.8 GB (limit: 0.0 GB) 2026-01-09 01:23:58.025789 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-09 01:23:58.025796 | orchestrator | 2026-01-09 01:23:58 | INFO  | [testbed-node-2] File Descriptors: 116/1024 2026-01-09 01:23:58.025812 | orchestrator | 2026-01-09 01:23:58 | 
INFO  | [testbed-node-2] Sockets: 70/832 2026-01-09 01:23:58.025819 | orchestrator | 2026-01-09 01:23:58 | INFO  | RabbitMQ Cluster validation PASSED 2026-01-09 01:23:58.387606 | orchestrator | 2026-01-09 01:23:58.387696 | orchestrator | # Status of Redis 2026-01-09 01:23:58.387707 | orchestrator | 2026-01-09 01:23:58.387715 | orchestrator | + echo 2026-01-09 01:23:58.387723 | orchestrator | + echo '# Status of Redis' 2026-01-09 01:23:58.387731 | orchestrator | + echo 2026-01-09 01:23:58.387740 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-09 01:23:58.396639 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002152s;;;0.000000;10.000000 2026-01-09 01:23:58.396723 | orchestrator | 2026-01-09 01:23:58.396733 | orchestrator | # Create backup of MariaDB database 2026-01-09 01:23:58.396741 | orchestrator | 2026-01-09 01:23:58.396749 | orchestrator | + popd 2026-01-09 01:23:58.396756 | orchestrator | + echo 2026-01-09 01:23:58.396764 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-09 01:23:58.396771 | orchestrator | + echo 2026-01-09 01:23:58.396779 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-09 01:24:00.636644 | orchestrator | 2026-01-09 01:24:00 | INFO  | Task f49d7fc1-8463-4e4a-97db-f16a114735b8 (mariadb_backup) was prepared for execution. 2026-01-09 01:24:00.636741 | orchestrator | 2026-01-09 01:24:00 | INFO  | It takes a moment until task f49d7fc1-8463-4e4a-97db-f16a114735b8 (mariadb_backup) has been started and output is visible here. 
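The RabbitMQ cluster validation above passes when all three nodes are running, no partitions are reported, and resource limits (file descriptors, sockets) are not exhausted. A minimal sketch of that pass/fail logic, assuming payload fields shaped like the management API's `/api/nodes` response (the helper name and sample data are our own, with values mirrored from the log):

```python
# Sketch: evaluate RabbitMQ cluster health from the management-API fields the
# validator above prints. Field names follow the /api/nodes payload shape;
# the sample values mirror the log output and the helper name is our own.

def rabbitmq_cluster_healthy(nodes: list[dict], expected_size: int = 3) -> bool:
    """True when every node is running, no partitions exist, file
    descriptors are not exhausted, and the cluster has the expected size."""
    if len(nodes) != expected_size:
        return False
    for node in nodes:
        if not node["running"]:
            return False
        if node["partitions"]:          # non-empty list => partition / split brain
            return False
        if node["fd_used"] >= node["fd_total"]:
            return False                # file descriptors exhausted
    return True

# Sample mirroring the validated testbed cluster (3 running nodes, no partitions)
sample = [
    {"name": f"rabbit@testbed-node-{i}", "running": True,
     "partitions": [], "fd_used": 116, "fd_total": 1024}
    for i in range(3)
]
print(rabbitmq_cluster_healthy(sample))  # True
```

The real validator also inspects queue, connection, and memory figures, but partition state and node liveness are the hard failure conditions.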
2026-01-09 01:24:28.424685 | orchestrator | 2026-01-09 01:24:28.424814 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-09 01:24:28.424836 | orchestrator | 2026-01-09 01:24:28.424849 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-09 01:24:28.424861 | orchestrator | Friday 09 January 2026 01:24:04 +0000 (0:00:00.188) 0:00:00.188 ******** 2026-01-09 01:24:28.424873 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:24:28.424886 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:24:28.424897 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:24:28.424908 | orchestrator | 2026-01-09 01:24:28.424919 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-09 01:24:28.424930 | orchestrator | Friday 09 January 2026 01:24:05 +0000 (0:00:00.372) 0:00:00.561 ******** 2026-01-09 01:24:28.424942 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-09 01:24:28.424954 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-09 01:24:28.424965 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-09 01:24:28.424976 | orchestrator | 2026-01-09 01:24:28.424988 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-09 01:24:28.425029 | orchestrator | 2026-01-09 01:24:28.425042 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-09 01:24:28.425052 | orchestrator | Friday 09 January 2026 01:24:05 +0000 (0:00:00.603) 0:00:01.164 ******** 2026-01-09 01:24:28.425063 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-09 01:24:28.425104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-09 01:24:28.425118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-09 01:24:28.425130 | orchestrator | 
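The earlier `osism status database` output reports the MariaDB Galera checks that gate the "validation PASSED" line: cluster status Primary, connected and ready flags ON, cluster size 3, local state Synced. A condensed sketch of that decision, assuming a dict of `wsrep_*` status variables as MariaDB reports them (sample values taken from the log; the function name is our own):

```python
# Sketch: the pass/fail logic behind a Galera status check like the
# "osism status database" output above. Keys follow MariaDB's wsrep_*
# status variables; the sample values are copied from the log.

def galera_cluster_ok(status: dict, expected_size: int = 3) -> bool:
    """Validate the wsrep status variables a healthy Galera node reports."""
    return (
        status["wsrep_cluster_status"] == "Primary"
        and status["wsrep_connected"] == "ON"
        and status["wsrep_ready"] == "ON"
        and int(status["wsrep_cluster_size"]) == expected_size
        and status["wsrep_local_state_comment"] == "Synced"
    )

sample = {
    "wsrep_cluster_status": "Primary",
    "wsrep_connected": "ON",
    "wsrep_ready": "ON",
    "wsrep_cluster_size": "3",
    "wsrep_local_state_comment": "Synced",
}
print(galera_cluster_ok(sample))  # True
```

A node that is connected but still catching up would report a local state such as "Joined" rather than "Synced" and fail this check.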
2026-01-09 01:24:28.425141 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-09 01:24:28.425152 | orchestrator | Friday 09 January 2026 01:24:06 +0000 (0:00:00.411) 0:00:01.576 ******** 2026-01-09 01:24:28.425165 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-09 01:24:28.425177 | orchestrator | 2026-01-09 01:24:28.425189 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-09 01:24:28.425201 | orchestrator | Friday 09 January 2026 01:24:06 +0000 (0:00:00.530) 0:00:02.106 ******** 2026-01-09 01:24:28.425214 | orchestrator | ok: [testbed-node-0] 2026-01-09 01:24:28.425224 | orchestrator | ok: [testbed-node-1] 2026-01-09 01:24:28.425235 | orchestrator | ok: [testbed-node-2] 2026-01-09 01:24:28.425247 | orchestrator | 2026-01-09 01:24:28.425258 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-09 01:24:28.425268 | orchestrator | Friday 09 January 2026 01:24:10 +0000 (0:00:03.402) 0:00:05.508 ******** 2026-01-09 01:24:28.425278 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-09 01:24:28.425289 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-09 01:24:28.425318 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-09 01:24:28.425331 | orchestrator | mariadb_bootstrap_restart 2026-01-09 01:24:28.425342 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:24:28.425353 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:24:28.425364 | orchestrator | changed: [testbed-node-0] 2026-01-09 01:24:28.425374 | orchestrator | 2026-01-09 01:24:28.425385 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-09 01:24:28.425396 | orchestrator | 
skipping: no hosts matched 2026-01-09 01:24:28.425407 | orchestrator | 2026-01-09 01:24:28.425417 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-09 01:24:28.425427 | orchestrator | skipping: no hosts matched 2026-01-09 01:24:28.425438 | orchestrator | 2026-01-09 01:24:28.425450 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-09 01:24:28.425460 | orchestrator | skipping: no hosts matched 2026-01-09 01:24:28.425471 | orchestrator | 2026-01-09 01:24:28.425480 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-09 01:24:28.425490 | orchestrator | 2026-01-09 01:24:28.425500 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-09 01:24:28.425510 | orchestrator | Friday 09 January 2026 01:24:27 +0000 (0:00:16.944) 0:00:22.453 ******** 2026-01-09 01:24:28.425522 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:24:28.425532 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:24:28.425544 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:24:28.425556 | orchestrator | 2026-01-09 01:24:28.425567 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-09 01:24:28.425577 | orchestrator | Friday 09 January 2026 01:24:27 +0000 (0:00:00.319) 0:00:22.773 ******** 2026-01-09 01:24:28.425588 | orchestrator | skipping: [testbed-node-0] 2026-01-09 01:24:28.425599 | orchestrator | skipping: [testbed-node-1] 2026-01-09 01:24:28.425610 | orchestrator | skipping: [testbed-node-2] 2026-01-09 01:24:28.425622 | orchestrator | 2026-01-09 01:24:28.425634 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-09 01:24:28.425648 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-09 
01:24:28.425676 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 01:24:28.425688 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-09 01:24:28.425699 | orchestrator | 2026-01-09 01:24:28.425711 | orchestrator | 2026-01-09 01:24:28.425721 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-09 01:24:28.425732 | orchestrator | Friday 09 January 2026 01:24:28 +0000 (0:00:00.469) 0:00:23.242 ******** 2026-01-09 01:24:28.425743 | orchestrator | =============================================================================== 2026-01-09 01:24:28.425754 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 16.94s 2026-01-09 01:24:28.425789 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.40s 2026-01-09 01:24:28.425797 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-01-09 01:24:28.425804 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2026-01-09 01:24:28.425811 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.47s 2026-01-09 01:24:28.425818 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-01-09 01:24:28.425824 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-01-09 01:24:28.425831 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-01-09 01:24:28.778303 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-09 01:24:28.788845 | orchestrator | + set -e 2026-01-09 01:24:28.789039 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-09 01:24:28.789610 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-09 01:24:28.790552 | orchestrator | ++ INTERACTIVE=false 2026-01-09 01:24:28.790602 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-09 01:24:28.790612 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-09 01:24:28.790621 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-09 01:24:28.791699 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-09 01:24:28.798195 | orchestrator | 2026-01-09 01:24:28.798277 | orchestrator | # OpenStack endpoints 2026-01-09 01:24:28.798287 | orchestrator | 2026-01-09 01:24:28.798295 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-09 01:24:28.798303 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-09 01:24:28.798310 | orchestrator | + export OS_CLOUD=admin 2026-01-09 01:24:28.798317 | orchestrator | + OS_CLOUD=admin 2026-01-09 01:24:28.798325 | orchestrator | + echo 2026-01-09 01:24:28.798332 | orchestrator | + echo '# OpenStack endpoints' 2026-01-09 01:24:28.798339 | orchestrator | + echo 2026-01-09 01:24:28.798345 | orchestrator | + openstack endpoint list 2026-01-09 01:24:32.213961 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-09 01:24:32.214138 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-09 01:24:32.214149 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-09 01:24:32.214154 | orchestrator | | 1dfba2ab180047bab37152dc1935905a | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-09 01:24:32.214160 | orchestrator | | 2511e80956e7407bbd851f41d35f693a | RegionOne | placement | placement | True | 
internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-09 01:24:32.214178 | orchestrator | | 251e8403cb2d46a0a7d843590cb23e43 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-09 01:24:32.214200 | orchestrator | | 27ea4b4cd9d84b068ad40af52ea12184 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-09 01:24:32.214205 | orchestrator | | 31610f18275e428e993401d0cdc56b60 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-09 01:24:32.214211 | orchestrator | | 328967d651ae4c6c9f544b8765031ebc | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-09 01:24:32.214217 | orchestrator | | 331503ef453846198b3838cf5e936d42 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-09 01:24:32.214222 | orchestrator | | 4500cd615c93443b8296845846c4f7e8 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-09 01:24:32.214227 | orchestrator | | 4aac1f7c225d4db0b5ae8bc619a7fa43 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-09 01:24:32.214232 | orchestrator | | 5ae73d24ad954e8d8c8edf005dea434a | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-09 01:24:32.214237 | orchestrator | | 5d539b8ce56b4feab145017c1c1c9609 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-09 01:24:32.214242 | orchestrator | | 73a1b7961c67412f8979c97e53e38061 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-09 01:24:32.214247 | orchestrator | | 851840d4b6004087ae71befb95fefade | RegionOne | nova | compute | True | internal | 
https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-09 01:24:32.214252 | orchestrator | | 8f999492d67241cba783cd201a0d9485 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-09 01:24:32.214257 | orchestrator | | 96d4df47ca234104b278ff97b31f4b5f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-09 01:24:32.214262 | orchestrator | | b2063612512f49ed8c7e0a88033fb885 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-09 01:24:32.214267 | orchestrator | | b4ec02d46e1948ccb494cb0e24910874 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-09 01:24:32.214273 | orchestrator | | c453f7a3dd1c4ea2b4209483d0d7034d | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-09 01:24:32.214278 | orchestrator | | d44a614dc329493eb81ee962669ba5a6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-09 01:24:32.214283 | orchestrator | | e81e5bfd8283427599c077bf8cca95ff | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-09 01:24:32.214300 | orchestrator | | ee86a5eabdc54452804542b8b318e345 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-09 01:24:32.214306 | orchestrator | | fa83b6c48e0a4c02bd0f94f8b39a1d05 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-09 01:24:32.214311 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-09 01:24:32.466936 | orchestrator | 2026-01-09 01:24:32.467037 | orchestrator | # Cinder 2026-01-09 01:24:32.467046 | orchestrator | 2026-01-09 01:24:32.467052 | orchestrator | + echo 
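The endpoint table above shows each service registered with both a public and an internal interface. A small sketch for checking that completeness programmatically, assuming rows shaped like `openstack endpoint list -f json` output (keys `"Service Name"` and `"Interface"`); the sample rows are abridged from the table and the helper name is our own:

```python
# Sketch: verify every OpenStack service exposes both a public and an
# internal endpoint, as the table above shows. Input rows follow the shape
# of `openstack endpoint list -f json`; sample rows abridged from the log.
from collections import defaultdict

def missing_interfaces(endpoints: list[dict],
                       required=("public", "internal")) -> dict:
    """Map each service name to the required interfaces it lacks."""
    seen = defaultdict(set)
    for ep in endpoints:
        seen[ep["Service Name"]].add(ep["Interface"])
    return {svc: sorted(set(required) - ifaces)
            for svc, ifaces in seen.items()
            if set(required) - ifaces}

sample = [
    {"Service Name": "keystone", "Interface": "public"},
    {"Service Name": "keystone", "Interface": "internal"},
    {"Service Name": "designate", "Interface": "public"},
]
print(missing_interfaces(sample))  # {'designate': ['internal']}
```

Run against the full table in the log, the result would be empty, since every service carries both interfaces.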
2026-01-09 01:24:32.467058 | orchestrator | + echo '# Cinder' 2026-01-09 01:24:32.467063 | orchestrator | + echo 2026-01-09 01:24:32.467069 | orchestrator | + openstack volume service list 2026-01-09 01:24:36.395763 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:36.395859 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-09 01:24:36.395868 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:36.395876 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-09T01:24:30.000000 | 2026-01-09 01:24:36.395882 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-09T01:24:30.000000 | 2026-01-09 01:24:36.395888 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-09T01:24:31.000000 | 2026-01-09 01:24:36.395894 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-09T01:24:30.000000 | 2026-01-09 01:24:36.395913 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-09T01:24:27.000000 | 2026-01-09 01:24:36.395919 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-09T01:24:27.000000 | 2026-01-09 01:24:36.395932 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-09T01:24:34.000000 | 2026-01-09 01:24:36.395938 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-09T01:24:35.000000 | 2026-01-09 01:24:36.395944 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-09T01:24:27.000000 | 2026-01-09 01:24:36.395950 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:36.693575 | 
orchestrator | 2026-01-09 01:24:36.693673 | orchestrator | # Neutron 2026-01-09 01:24:36.693686 | orchestrator | 2026-01-09 01:24:36.693696 | orchestrator | + echo 2026-01-09 01:24:36.693705 | orchestrator | + echo '# Neutron' 2026-01-09 01:24:36.693715 | orchestrator | + echo 2026-01-09 01:24:36.693724 | orchestrator | + openstack network agent list 2026-01-09 01:24:39.552671 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-09 01:24:39.552754 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-09 01:24:39.552763 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-09 01:24:39.552769 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552774 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552779 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552784 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552790 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552796 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-09 01:24:39.552801 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-09 01:24:39.552823 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | 
UP | neutron-ovn-metadata-agent | 2026-01-09 01:24:39.552828 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-09 01:24:39.552834 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-09 01:24:39.821200 | orchestrator | + openstack network service provider list 2026-01-09 01:24:42.467427 | orchestrator | +---------------+------+---------+ 2026-01-09 01:24:42.467531 | orchestrator | | Service Type | Name | Default | 2026-01-09 01:24:42.467541 | orchestrator | +---------------+------+---------+ 2026-01-09 01:24:42.467549 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-01-09 01:24:42.467557 | orchestrator | +---------------+------+---------+ 2026-01-09 01:24:42.730544 | orchestrator | 2026-01-09 01:24:42.730641 | orchestrator | # Nova 2026-01-09 01:24:42.730655 | orchestrator | 2026-01-09 01:24:42.730663 | orchestrator | + echo 2026-01-09 01:24:42.730671 | orchestrator | + echo '# Nova' 2026-01-09 01:24:42.730678 | orchestrator | + echo 2026-01-09 01:24:42.730687 | orchestrator | + openstack compute service list 2026-01-09 01:24:45.509619 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:45.509726 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-01-09 01:24:45.509736 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:45.509743 | orchestrator | | 39a25cba-7d0d-46d8-b8d0-e46e0af7a709 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-09T01:24:43.000000 | 2026-01-09 01:24:45.509751 | orchestrator | | bd241b51-de46-45e6-8822-f7396e1ed7d7 | nova-conductor | testbed-node-2 
| internal | enabled | up | 2026-01-09T01:24:44.000000 | 2026-01-09 01:24:45.509777 | orchestrator | | 2bd97711-6f64-427c-bffd-1f482dc75a52 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-09T01:24:35.000000 | 2026-01-09 01:24:45.509784 | orchestrator | | 691e9ef0-1e31-45a5-96eb-3d95ff3ee373 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-09T01:24:35.000000 | 2026-01-09 01:24:45.509790 | orchestrator | | 733c99a8-28a5-4b3a-9d10-30681c544dd1 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-09T01:24:36.000000 | 2026-01-09 01:24:45.509797 | orchestrator | | befd6335-f7f0-42b2-8ea2-7adb0683a33b | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-09T01:24:36.000000 | 2026-01-09 01:24:45.509803 | orchestrator | | c8b5ab51-4780-40fe-9481-cb64db724e4e | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-09T01:24:37.000000 | 2026-01-09 01:24:45.509809 | orchestrator | | 4638cd4f-b0f7-49b6-a96c-6501aa62c879 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-09T01:24:41.000000 | 2026-01-09 01:24:45.509814 | orchestrator | | f40d4a35-b397-4eac-a68a-259c212b9d20 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-09T01:24:41.000000 | 2026-01-09 01:24:45.509818 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-09 01:24:45.778193 | orchestrator | + openstack hypervisor list 2026-01-09 01:24:48.391658 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-09 01:24:48.391745 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-01-09 01:24:48.391751 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-09 01:24:48.391755 | orchestrator | | 05cdd987-bfe5-45ae-8818-ce504e318c46 | 
testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-01-09 01:24:48.391759 | orchestrator | | 5bd501e6-d793-4209-8657-2cd41182158d | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-01-09 01:24:48.391782 | orchestrator | | 7a2781f7-f1c4-4318-b5fb-e39e5efdb28b | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-01-09 01:24:48.391787 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-09 01:24:48.654107 | orchestrator | 2026-01-09 01:24:48.654212 | orchestrator | # Run OpenStack test play 2026-01-09 01:24:48.654229 | orchestrator | 2026-01-09 01:24:48.654240 | orchestrator | + echo 2026-01-09 01:24:48.654250 | orchestrator | + echo '# Run OpenStack test play' 2026-01-09 01:24:48.654261 | orchestrator | + echo 2026-01-09 01:24:48.654271 | orchestrator | + osism apply --environment openstack test 2026-01-09 01:24:50.722858 | orchestrator | 2026-01-09 01:24:50 | INFO  | Trying to run play test in environment openstack 2026-01-09 01:25:00.829442 | orchestrator | 2026-01-09 01:25:00 | INFO  | Task 906bbedf-f00b-46b6-85fe-2eb7d3826f43 (test) was prepared for execution. 2026-01-09 01:25:00.829553 | orchestrator | 2026-01-09 01:25:00 | INFO  | It takes a moment until task 906bbedf-f00b-46b6-85fe-2eb7d3826f43 (test) has been started and output is visible here. 
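The Cinder, Neutron, and Nova listings above print ASCII tables that a reader has to eyeball for `enabled`/`up`. When the same health checks need to gate a job programmatically, the CLI's machine-readable output (`openstack compute service list -f json`) is easier to evaluate. A minimal sketch, assuming JSON records with the column names shown in the table headers above (the inlined sample mirrors a few rows of this run; field names outside the log are assumptions):

```python
import json

# Sample mirroring rows of the `openstack compute service list` table above,
# in the shape `-f json` output is assumed to take (keys from the table header).
SAMPLE = json.dumps([
    {"Binary": "nova-conductor", "Host": "testbed-node-0", "Zone": "internal", "Status": "enabled", "State": "up"},
    {"Binary": "nova-compute",   "Host": "testbed-node-3", "Zone": "nova",     "Status": "enabled", "State": "up"},
    {"Binary": "nova-scheduler", "Host": "testbed-node-2", "Zone": "internal", "Status": "enabled", "State": "up"},
])

def unhealthy(services_json: str) -> list[str]:
    """Return 'binary@host' for every service that is not enabled and up."""
    return [
        f"{svc['Binary']}@{svc['Host']}"
        for svc in json.loads(services_json)
        if svc["Status"] != "enabled" or svc["State"] != "up"
    ]

if __name__ == "__main__":
    failed = unhealthy(SAMPLE)
    print("all services up" if not failed else f"degraded: {', '.join(failed)}")
```

The same pattern would apply to the volume service and network agent listings, keyed on their own status columns; a non-empty result could fail the job before the test play and Tempest run.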
2026-01-09 01:32:25.031306 | orchestrator |
2026-01-09 01:32:25.031400 | orchestrator | PLAY [Create test project] *****************************************************
2026-01-09 01:32:25.031412 | orchestrator |
2026-01-09 01:32:25.031417 | orchestrator | TASK [Create test domain] ******************************************************
2026-01-09 01:32:25.031422 | orchestrator | Friday 09 January 2026 01:25:05 +0000 (0:00:00.070) 0:00:00.070 ********
2026-01-09 01:32:25.031427 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031431 | orchestrator |
2026-01-09 01:32:25.031436 | orchestrator | TASK [Create test-admin user] **************************************************
2026-01-09 01:32:25.031440 | orchestrator | Friday 09 January 2026 01:25:08 +0000 (0:00:03.644) 0:00:03.714 ********
2026-01-09 01:32:25.031444 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031448 | orchestrator |
2026-01-09 01:32:25.031452 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-01-09 01:32:25.031456 | orchestrator | Friday 09 January 2026 01:25:13 +0000 (0:00:04.329) 0:00:08.044 ********
2026-01-09 01:32:25.031460 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031464 | orchestrator |
2026-01-09 01:32:25.031468 | orchestrator | TASK [Create test project] *****************************************************
2026-01-09 01:32:25.031472 | orchestrator | Friday 09 January 2026 01:25:19 +0000 (0:00:06.676) 0:00:14.720 ********
2026-01-09 01:32:25.031476 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031481 | orchestrator |
2026-01-09 01:32:25.031487 | orchestrator | TASK [Create test user] ********************************************************
2026-01-09 01:32:25.031493 | orchestrator | Friday 09 January 2026 01:25:23 +0000 (0:00:03.995) 0:00:18.716 ********
2026-01-09 01:32:25.031499 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031505 | orchestrator |
2026-01-09 01:32:25.031513 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-01-09 01:32:25.031522 | orchestrator | Friday 09 January 2026 01:25:28 +0000 (0:00:04.243) 0:00:22.959 ********
2026-01-09 01:32:25.031530 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-01-09 01:32:25.031537 | orchestrator | changed: [localhost] => (item=member)
2026-01-09 01:32:25.031544 | orchestrator | changed: [localhost] => (item=creator)
2026-01-09 01:32:25.031550 | orchestrator |
2026-01-09 01:32:25.031557 | orchestrator | TASK [Create test server group] ************************************************
2026-01-09 01:32:25.031563 | orchestrator | Friday 09 January 2026 01:25:39 +0000 (0:00:11.482) 0:00:34.442 ********
2026-01-09 01:32:25.031570 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031577 | orchestrator |
2026-01-09 01:32:25.031583 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-01-09 01:32:25.031591 | orchestrator | Friday 09 January 2026 01:25:43 +0000 (0:00:04.363) 0:00:38.805 ********
2026-01-09 01:32:25.031598 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031605 | orchestrator |
2026-01-09 01:32:25.031625 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-01-09 01:32:25.031629 | orchestrator | Friday 09 January 2026 01:25:48 +0000 (0:00:04.905) 0:00:43.711 ********
2026-01-09 01:32:25.031648 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031652 | orchestrator |
2026-01-09 01:32:25.031656 | orchestrator | TASK [Create icmp security group] **********************************************
2026-01-09 01:32:25.031660 | orchestrator | Friday 09 January 2026 01:25:52 +0000 (0:00:04.194) 0:00:47.905 ********
2026-01-09 01:32:25.031663 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031667 | orchestrator |
2026-01-09 01:32:25.031671 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-01-09 01:32:25.031674 | orchestrator | Friday 09 January 2026 01:25:56 +0000 (0:00:03.915) 0:00:51.821 ********
2026-01-09 01:32:25.031678 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031682 | orchestrator |
2026-01-09 01:32:25.031686 | orchestrator | TASK [Create test keypair] *****************************************************
2026-01-09 01:32:25.031689 | orchestrator | Friday 09 January 2026 01:26:00 +0000 (0:00:03.975) 0:00:55.797 ********
2026-01-09 01:32:25.031693 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031697 | orchestrator |
2026-01-09 01:32:25.031701 | orchestrator | TASK [Create test network] *****************************************************
2026-01-09 01:32:25.031704 | orchestrator | Friday 09 January 2026 01:26:04 +0000 (0:00:03.910) 0:00:59.708 ********
2026-01-09 01:32:25.031708 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031712 | orchestrator |
2026-01-09 01:32:25.031715 | orchestrator | TASK [Create test subnet] ******************************************************
2026-01-09 01:32:25.031719 | orchestrator | Friday 09 January 2026 01:26:09 +0000 (0:00:04.667) 0:01:04.375 ********
2026-01-09 01:32:25.031723 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031726 | orchestrator |
2026-01-09 01:32:25.031730 | orchestrator | TASK [Create test router] ******************************************************
2026-01-09 01:32:25.031736 | orchestrator | Friday 09 January 2026 01:26:15 +0000 (0:00:05.601) 0:01:09.976 ********
2026-01-09 01:32:25.031742 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031751 | orchestrator |
2026-01-09 01:32:25.031760 | orchestrator | TASK [Create test instances] ***************************************************
2026-01-09 01:32:25.031766 | orchestrator | Friday 09 January 2026 01:26:25 +0000 (0:00:10.798) 0:01:20.775 ********
2026-01-09 01:32:25.031771 | orchestrator | changed: [localhost] => (item=test)
2026-01-09 01:32:25.031777 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-09 01:32:25.031783 | orchestrator |
2026-01-09 01:32:25.031789 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-09 01:32:25.031795 | orchestrator |
2026-01-09 01:32:25.031802 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-09 01:32:25.031808 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-09 01:32:25.031814 | orchestrator |
2026-01-09 01:32:25.031820 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-09 01:32:25.031827 | orchestrator |
2026-01-09 01:32:25.031833 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-09 01:32:25.031839 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-09 01:32:25.031844 | orchestrator |
2026-01-09 01:32:25.031848 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2026-01-09 01:32:25.031853 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-09 01:32:25.031857 | orchestrator |
2026-01-09 01:32:25.031862 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-01-09 01:32:25.031879 | orchestrator | Friday 09 January 2026 01:30:59 +0000 (0:04:34.032) 0:05:54.808 ********
2026-01-09 01:32:25.031884 | orchestrator | changed: [localhost] => (item=test)
2026-01-09 01:32:25.031889 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-09 01:32:25.031893 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-09 01:32:25.031898 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-09 01:32:25.031902 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-09 01:32:25.031907 | orchestrator |
2026-01-09 01:32:25.031911 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-01-09 01:32:25.031921 | orchestrator | Friday 09 January 2026 01:31:23 +0000 (0:00:23.923) 0:06:18.732 ********
2026-01-09 01:32:25.031925 | orchestrator | changed: [localhost] => (item=test)
2026-01-09 01:32:25.031930 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-09 01:32:25.031934 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-09 01:32:25.031939 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-09 01:32:25.031943 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-09 01:32:25.031948 | orchestrator |
2026-01-09 01:32:25.031952 | orchestrator | TASK [Create test volume] ******************************************************
2026-01-09 01:32:25.031956 | orchestrator | Friday 09 January 2026 01:31:59 +0000 (0:00:35.204) 0:06:53.937 ********
2026-01-09 01:32:25.031960 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031965 | orchestrator |
2026-01-09 01:32:25.031969 | orchestrator | TASK [Attach test volume] ******************************************************
2026-01-09 01:32:25.031974 | orchestrator | Friday 09 January 2026 01:32:05 +0000 (0:00:06.832) 0:07:00.769 ********
2026-01-09 01:32:25.031978 | orchestrator | changed: [localhost]
2026-01-09 01:32:25.031983 | orchestrator |
2026-01-09 01:32:25.031987 | orchestrator | TASK [Create floating ip address] **********************************************
2026-01-09 01:32:25.031991 | orchestrator | Friday 09 January 2026 01:32:19 +0000 (0:00:13.684) 0:07:14.454 ********
2026-01-09 01:32:25.031996 | orchestrator | ok: [localhost]
2026-01-09 01:32:25.032001 | orchestrator |
2026-01-09 01:32:25.032006 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-01-09 01:32:25.032010 | orchestrator | Friday 09 January 2026 01:32:24 +0000 (0:00:05.154) 0:07:19.608 ********
2026-01-09 01:32:25.032014 | orchestrator | ok: [localhost] => {
2026-01-09 01:32:25.032019 | orchestrator |  "msg": "192.168.112.185"
2026-01-09 01:32:25.032024 | orchestrator | }
2026-01-09 01:32:25.032029 | orchestrator |
2026-01-09 01:32:25.032033 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:32:25.032038 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-09 01:32:25.032045 | orchestrator |
2026-01-09 01:32:25.032049 | orchestrator |
2026-01-09 01:32:25.032059 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:32:25.032090 | orchestrator | Friday 09 January 2026 01:32:24 +0000 (0:00:00.044) 0:07:19.653 ********
2026-01-09 01:32:25.032097 | orchestrator | ===============================================================================
2026-01-09 01:32:25.032103 | orchestrator | Create test instances ------------------------------------------------- 274.03s
2026-01-09 01:32:25.032109 | orchestrator | Add tag to instances --------------------------------------------------- 35.20s
2026-01-09 01:32:25.032114 | orchestrator | Add metadata to instances ---------------------------------------------- 23.92s
2026-01-09 01:32:25.032120 | orchestrator | Attach test volume ----------------------------------------------------- 13.68s
2026-01-09 01:32:25.032127 | orchestrator | Add member roles to user test ------------------------------------------ 11.48s
2026-01-09 01:32:25.032133 | orchestrator | Create test router ----------------------------------------------------- 10.80s
2026-01-09 01:32:25.032140 | orchestrator | Create test volume ------------------------------------------------------ 6.83s
2026-01-09 01:32:25.032147 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.68s
2026-01-09 01:32:25.032153 | orchestrator | Create test subnet ------------------------------------------------------ 5.60s
2026-01-09 01:32:25.032160 | orchestrator | Create floating ip address ---------------------------------------------- 5.15s
2026-01-09 01:32:25.032166 | orchestrator | Create ssh security group ----------------------------------------------- 4.91s
2026-01-09 01:32:25.032173 | orchestrator | Create test network ----------------------------------------------------- 4.67s
2026-01-09 01:32:25.032177 | orchestrator | Create test server group ------------------------------------------------ 4.36s
2026-01-09 01:32:25.032182 | orchestrator | Create test-admin user -------------------------------------------------- 4.33s
2026-01-09 01:32:25.032192 | orchestrator | Create test user -------------------------------------------------------- 4.24s
2026-01-09 01:32:25.032197 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.19s
2026-01-09 01:32:25.032202 | orchestrator | Create test project ----------------------------------------------------- 4.00s
2026-01-09 01:32:25.032206 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.98s
2026-01-09 01:32:25.032211 | orchestrator | Create icmp security group ---------------------------------------------- 3.92s
2026-01-09 01:32:25.032216 | orchestrator | Create test keypair ----------------------------------------------------- 3.91s
2026-01-09 01:32:25.362283 | orchestrator | + server_list
2026-01-09 01:32:25.362365 | orchestrator | + openstack --os-cloud test server list
2026-01-09 01:32:29.274952 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-09 01:32:29.275039 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-01-09 01:32:29.275047 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-09 01:32:29.275053 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE | test=192.168.112.132, 192.168.200.254 | N/A (booted from volume) | SCS-1L-1 |
2026-01-09 01:32:29.275084 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE | test=192.168.112.126, 192.168.200.231 | N/A (booted from volume) | SCS-1L-1 |
2026-01-09 01:32:29.275094 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE | test=192.168.112.163, 192.168.200.157 | N/A (booted from volume) | SCS-1L-1 |
2026-01-09 01:32:29.275103 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE | test=192.168.112.193, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 |
2026-01-09 01:32:29.275108 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE | test=192.168.112.185, 192.168.200.194 | N/A (booted from volume) | SCS-1L-1 |
2026-01-09 01:32:29.275114 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-09 01:32:29.574420 | orchestrator | + openstack --os-cloud test server show test
2026-01-09 01:32:33.026954 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:33.027048 | orchestrator | | Field | Value |
2026-01-09 01:32:33.027077 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:33.027083 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-09 01:32:33.027099 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-09 01:32:33.027104 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-09 01:32:33.027108 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-01-09 01:32:33.027112 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-09 01:32:33.027115 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-09 01:32:33.027130 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-09 01:32:33.027134 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-09 01:32:33.027138 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-09 01:32:33.027149 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-09 01:32:33.027153 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-09 01:32:33.027160 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-09 01:32:33.027164 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-09 01:32:33.027168 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-09 01:32:33.027172 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-09 01:32:33.027176 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-09T01:27:10.000000 |
2026-01-09 01:32:33.027184 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-09 01:32:33.027188 | orchestrator | | accessIPv4 | |
2026-01-09 01:32:33.027192 | orchestrator | | accessIPv6 | |
2026-01-09 01:32:33.027199 | orchestrator | | addresses | test=192.168.112.185, 192.168.200.194 |
2026-01-09 01:32:33.027206 | orchestrator | | config_drive | |
2026-01-09 01:32:33.027210 | orchestrator | | created | 2026-01-09T01:26:34Z |
2026-01-09 01:32:33.027213 | orchestrator | | description | None |
2026-01-09 01:32:33.027217 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-09 01:32:33.027221 | orchestrator | | hostId | bc6059727ec69cc287d36ba2409b4598aeea994350a4de8646a8bcda |
2026-01-09 01:32:33.027225 | orchestrator | | host_status | None |
2026-01-09 01:32:33.027233 | orchestrator | | id | 0a7520eb-2c4f-48c1-994c-61a8501880f8 |
2026-01-09 01:32:33.027237 | orchestrator | | image | N/A (booted from volume) |
2026-01-09 01:32:33.027241 | orchestrator | | key_name | test |
2026-01-09 01:32:33.027249 | orchestrator | | locked | False |
2026-01-09 01:32:33.027253 | orchestrator | | locked_reason | None |
2026-01-09 01:32:33.027257 | orchestrator | | name | test |
2026-01-09 01:32:33.027261 | orchestrator | | pinned_availability_zone | None |
2026-01-09 01:32:33.027265 | orchestrator | | progress | 0 |
2026-01-09 01:32:33.027268 | orchestrator | | project_id | 23ca2481eb324707b8cdf204fb6cc5ce |
2026-01-09 01:32:33.027272 | orchestrator | | properties | hostname='test' |
2026-01-09 01:32:33.027280 | orchestrator | | security_groups | name='icmp' |
2026-01-09 01:32:33.027284 | orchestrator | | | name='ssh' |
2026-01-09 01:32:33.027293 | orchestrator | | server_groups | None |
2026-01-09 01:32:33.027297 | orchestrator | | status | ACTIVE |
2026-01-09 01:32:33.027300 | orchestrator | | tags | test |
2026-01-09 01:32:33.027304 | orchestrator | | trusted_image_certificates | None |
2026-01-09 01:32:33.027308 | orchestrator | | updated | 2026-01-09T01:31:04Z |
2026-01-09 01:32:33.027313 | orchestrator | | user_id | edb6249862f14efa833425bce17ce2fc |
2026-01-09 01:32:33.027317 | orchestrator | | volumes_attached | delete_on_termination='True', id='e2137207-c5f6-4d80-8fb1-4b029a186630' |
2026-01-09 01:32:33.027321 | orchestrator | | | delete_on_termination='False', id='fdd2c610-3103-4f9d-95d6-89990c4e9f6c' |
2026-01-09 01:32:33.031885 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:33.334756 | orchestrator | + openstack --os-cloud test server show test-1
2026-01-09 01:32:36.690452 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:36.690572 | orchestrator | | Field | Value |
2026-01-09 01:32:36.690581 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:36.690587 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-09 01:32:36.690592 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-09 01:32:36.690597 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-09 01:32:36.690601 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-01-09 01:32:36.690606 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-09 01:32:36.690611 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-09 01:32:36.690627 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-09 01:32:36.690647 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-09 01:32:36.690653 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-09 01:32:36.690666 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-09 01:32:36.690671 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-09 01:32:36.690682 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-09 01:32:36.690692 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-09 01:32:36.690697 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-09 01:32:36.690702 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-09 01:32:36.690724 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-09T01:28:09.000000 |
2026-01-09 01:32:36.690742 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-09 01:32:36.690748 | orchestrator | | accessIPv4 | |
2026-01-09 01:32:36.690756 | orchestrator | | accessIPv6 | |
2026-01-09 01:32:36.690761 | orchestrator | | addresses | test=192.168.112.193, 192.168.200.66 |
2026-01-09 01:32:36.690766 | orchestrator | | config_drive | |
2026-01-09 01:32:36.690771 | orchestrator | | created | 2026-01-09T01:27:34Z |
2026-01-09 01:32:36.690775 | orchestrator | | description | None |
2026-01-09 01:32:36.690780 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-09 01:32:36.690785 | orchestrator | | hostId | 7918b9bf2e7b412eb8460a382b265cc439594ac08a5e9add7a0467ef |
2026-01-09 01:32:36.690795 | orchestrator | | host_status | None |
2026-01-09 01:32:36.690804 | orchestrator | | id | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 |
2026-01-09 01:32:36.690809 | orchestrator | | image | N/A (booted from volume) |
2026-01-09 01:32:36.690817 | orchestrator | | key_name | test |
2026-01-09 01:32:36.690822 | orchestrator | | locked | False |
2026-01-09 01:32:36.690826 | orchestrator | | locked_reason | None |
2026-01-09 01:32:36.690831 | orchestrator | | name | test-1 |
2026-01-09 01:32:36.690836 | orchestrator | | pinned_availability_zone | None |
2026-01-09 01:32:36.690841 | orchestrator | | progress | 0 |
2026-01-09 01:32:36.690849 | orchestrator | | project_id | 23ca2481eb324707b8cdf204fb6cc5ce |
2026-01-09 01:32:36.690853 | orchestrator | | properties | hostname='test-1' |
2026-01-09 01:32:36.690863 | orchestrator | | security_groups | name='icmp' |
2026-01-09 01:32:36.690868 | orchestrator | | | name='ssh' |
2026-01-09 01:32:36.690876 | orchestrator | | server_groups | None |
2026-01-09 01:32:36.690881 | orchestrator | | status | ACTIVE |
2026-01-09 01:32:36.690885 | orchestrator | | tags | test |
2026-01-09 01:32:36.690890 | orchestrator | | trusted_image_certificates | None |
2026-01-09 01:32:36.690895 | orchestrator | | updated | 2026-01-09T01:31:09Z |
2026-01-09 01:32:36.690900 | orchestrator | | user_id | edb6249862f14efa833425bce17ce2fc |
2026-01-09 01:32:36.690908 | orchestrator | | volumes_attached | delete_on_termination='True', id='c73ea073-8218-47e3-8222-8db64e208a06' |
2026-01-09 01:32:36.694719 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:36.962290 | orchestrator | + openstack --os-cloud test server show test-2
2026-01-09 01:32:39.898099 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:39.898197 | orchestrator | | Field | Value |
2026-01-09 01:32:39.898222 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-09 01:32:39.898231 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-09 01:32:39.898239 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-09 01:32:39.898246 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-09 01:32:39.898254 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-01-09 01:32:39.898280 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-09 01:32:39.898288 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-09
01:32:39.898310 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-09 01:32:39.898319 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-09 01:32:39.898326 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-09 01:32:39.898338 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-09 01:32:39.898346 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-09 01:32:39.898353 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-09 01:32:39.898361 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-09 01:32:39.898375 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-09 01:32:39.898383 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-09 01:32:39.898396 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-09T01:29:06.000000 | 2026-01-09 01:32:39.898414 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-09 01:32:39.898427 | orchestrator | | accessIPv4 | | 2026-01-09 01:32:39.898454 | orchestrator | | accessIPv6 | | 2026-01-09 01:32:39.898467 | orchestrator | | addresses | test=192.168.112.163, 192.168.200.157 | 2026-01-09 01:32:39.898479 | orchestrator | | config_drive | | 2026-01-09 01:32:39.898491 | orchestrator | | created | 2026-01-09T01:28:28Z | 2026-01-09 01:32:39.898525 | orchestrator | | description | None | 2026-01-09 01:32:39.898537 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-09 01:32:39.898549 | orchestrator | | hostId | 953880e8af7f21332a220ed04e7acbea83ea84a407322ebefc847062 | 2026-01-09 01:32:39.898563 | orchestrator | | host_status | None | 2026-01-09 01:32:39.898583 | orchestrator | | id | 
95be74b0-5ba7-4871-a4cc-1f67a4cae876 | 2026-01-09 01:32:39.898596 | orchestrator | | image | N/A (booted from volume) | 2026-01-09 01:32:39.898614 | orchestrator | | key_name | test | 2026-01-09 01:32:39.898626 | orchestrator | | locked | False | 2026-01-09 01:32:39.898639 | orchestrator | | locked_reason | None | 2026-01-09 01:32:39.898649 | orchestrator | | name | test-2 | 2026-01-09 01:32:39.898672 | orchestrator | | pinned_availability_zone | None | 2026-01-09 01:32:39.898684 | orchestrator | | progress | 0 | 2026-01-09 01:32:39.898696 | orchestrator | | project_id | 23ca2481eb324707b8cdf204fb6cc5ce | 2026-01-09 01:32:39.898710 | orchestrator | | properties | hostname='test-2' | 2026-01-09 01:32:39.898730 | orchestrator | | security_groups | name='icmp' | 2026-01-09 01:32:39.898744 | orchestrator | | | name='ssh' | 2026-01-09 01:32:39.898762 | orchestrator | | server_groups | None | 2026-01-09 01:32:39.898775 | orchestrator | | status | ACTIVE | 2026-01-09 01:32:39.898789 | orchestrator | | tags | test | 2026-01-09 01:32:39.898810 | orchestrator | | trusted_image_certificates | None | 2026-01-09 01:32:39.898821 | orchestrator | | updated | 2026-01-09T01:31:14Z | 2026-01-09 01:32:39.898831 | orchestrator | | user_id | edb6249862f14efa833425bce17ce2fc | 2026-01-09 01:32:39.898839 | orchestrator | | volumes_attached | delete_on_termination='True', id='6021c212-28c4-47ce-a3a9-536eb3109ff8' | 2026-01-09 01:32:39.902804 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:40.188884 | orchestrator | + openstack --os-cloud test server show test-3 2026-01-09 01:32:43.188021 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:43.188203 | orchestrator | | Field | Value | 2026-01-09 01:32:43.188234 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:43.188244 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-09 01:32:43.188274 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-09 01:32:43.188284 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-09 01:32:43.188293 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-01-09 01:32:43.188312 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-09 01:32:43.188321 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-09 01:32:43.188346 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-09 01:32:43.188356 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-09 01:32:43.188365 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-09 01:32:43.188379 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-09 01:32:43.188396 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-09 01:32:43.188404 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-09 01:32:43.188414 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-01-09 01:32:43.188423 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-09 01:32:43.188432 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-09 01:32:43.188442 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-09T01:29:56.000000 | 2026-01-09 01:32:43.188454 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-09 01:32:43.188460 | orchestrator | | accessIPv4 | | 2026-01-09 01:32:43.188466 | orchestrator | | accessIPv6 | | 2026-01-09 01:32:43.188479 | orchestrator | | addresses | test=192.168.112.126, 192.168.200.231 | 2026-01-09 01:32:43.188484 | orchestrator | | config_drive | | 2026-01-09 01:32:43.188490 | orchestrator | | created | 2026-01-09T01:29:28Z | 2026-01-09 01:32:43.188496 | orchestrator | | description | None | 2026-01-09 01:32:43.188501 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-09 01:32:43.188507 | orchestrator | | hostId | bc6059727ec69cc287d36ba2409b4598aeea994350a4de8646a8bcda | 2026-01-09 01:32:43.188512 | orchestrator | | host_status | None | 2026-01-09 01:32:43.188523 | orchestrator | | id | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | 2026-01-09 01:32:43.188529 | orchestrator | | image | N/A (booted from volume) | 2026-01-09 01:32:43.188535 | orchestrator | | key_name | test | 2026-01-09 01:32:43.188547 | orchestrator | | locked | False | 2026-01-09 01:32:43.188553 | orchestrator | | locked_reason | None | 2026-01-09 01:32:43.188558 | orchestrator | | name | test-3 | 2026-01-09 01:32:43.188564 | orchestrator | | pinned_availability_zone | None | 2026-01-09 01:32:43.188569 | orchestrator | | progress | 0 | 2026-01-09 
01:32:43.188575 | orchestrator | | project_id | 23ca2481eb324707b8cdf204fb6cc5ce | 2026-01-09 01:32:43.188580 | orchestrator | | properties | hostname='test-3' | 2026-01-09 01:32:43.188590 | orchestrator | | security_groups | name='icmp' | 2026-01-09 01:32:43.188596 | orchestrator | | | name='ssh' | 2026-01-09 01:32:43.188606 | orchestrator | | server_groups | None | 2026-01-09 01:32:43.188612 | orchestrator | | status | ACTIVE | 2026-01-09 01:32:43.188946 | orchestrator | | tags | test | 2026-01-09 01:32:43.188965 | orchestrator | | trusted_image_certificates | None | 2026-01-09 01:32:43.188971 | orchestrator | | updated | 2026-01-09T01:31:18Z | 2026-01-09 01:32:43.188977 | orchestrator | | user_id | edb6249862f14efa833425bce17ce2fc | 2026-01-09 01:32:43.188982 | orchestrator | | volumes_attached | delete_on_termination='True', id='bc53e3b4-a080-4eea-9aa2-1d2ac2b910da' | 2026-01-09 01:32:43.192837 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:43.457485 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-09 01:32:46.391673 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:46.391801 | orchestrator | | Field | Value | 2026-01-09 01:32:46.391814 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:46.391822 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-09 01:32:46.391829 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-09 01:32:46.391842 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-09 01:32:46.391854 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-09 01:32:46.391871 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-09 01:32:46.391891 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-09 01:32:46.391929 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-09 01:32:46.391953 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-09 01:32:46.391965 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-09 01:32:46.391977 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-09 01:32:46.391988 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-09 01:32:46.391999 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-09 01:32:46.392010 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-09 01:32:46.392022 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-09 01:32:46.392033 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-09 01:32:46.392092 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-09T01:30:45.000000 | 2026-01-09 01:32:46.392135 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-09 01:32:46.392149 | orchestrator | | accessIPv4 | | 2026-01-09 01:32:46.392159 | orchestrator | | accessIPv6 | | 2026-01-09 01:32:46.392166 | orchestrator | | 
addresses | test=192.168.112.132, 192.168.200.254 | 2026-01-09 01:32:46.392176 | orchestrator | | config_drive | | 2026-01-09 01:32:46.392185 | orchestrator | | created | 2026-01-09T01:30:16Z | 2026-01-09 01:32:46.392193 | orchestrator | | description | None | 2026-01-09 01:32:46.392202 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-09 01:32:46.392211 | orchestrator | | hostId | 7918b9bf2e7b412eb8460a382b265cc439594ac08a5e9add7a0467ef | 2026-01-09 01:32:46.392220 | orchestrator | | host_status | None | 2026-01-09 01:32:46.392246 | orchestrator | | id | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | 2026-01-09 01:32:46.392255 | orchestrator | | image | N/A (booted from volume) | 2026-01-09 01:32:46.392263 | orchestrator | | key_name | test | 2026-01-09 01:32:46.392272 | orchestrator | | locked | False | 2026-01-09 01:32:46.392282 | orchestrator | | locked_reason | None | 2026-01-09 01:32:46.392302 | orchestrator | | name | test-4 | 2026-01-09 01:32:46.392320 | orchestrator | | pinned_availability_zone | None | 2026-01-09 01:32:46.392328 | orchestrator | | progress | 0 | 2026-01-09 01:32:46.392337 | orchestrator | | project_id | 23ca2481eb324707b8cdf204fb6cc5ce | 2026-01-09 01:32:46.392352 | orchestrator | | properties | hostname='test-4' | 2026-01-09 01:32:46.392371 | orchestrator | | security_groups | name='icmp' | 2026-01-09 01:32:46.392379 | orchestrator | | | name='ssh' | 2026-01-09 01:32:46.392387 | orchestrator | | server_groups | None | 2026-01-09 01:32:46.392394 | orchestrator | | status | ACTIVE | 2026-01-09 01:32:46.392402 | orchestrator | | tags | test | 2026-01-09 01:32:46.392409 | orchestrator | | 
trusted_image_certificates | None | 2026-01-09 01:32:46.392417 | orchestrator | | updated | 2026-01-09T01:31:23Z | 2026-01-09 01:32:46.392425 | orchestrator | | user_id | edb6249862f14efa833425bce17ce2fc | 2026-01-09 01:32:46.392438 | orchestrator | | volumes_attached | delete_on_termination='True', id='55aaf88f-edf7-4f8c-96f4-e45e490bd83a' | 2026-01-09 01:32:46.396102 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-09 01:32:46.684623 | orchestrator | + server_ping 2026-01-09 01:32:46.685946 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-09 01:32:46.686006 | orchestrator | ++ tr -d '\r' 2026-01-09 01:32:49.526350 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:32:49.526468 | orchestrator | + ping -c3 192.168.112.163 2026-01-09 01:32:49.541265 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data. 
2026-01-09 01:32:49.541369 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.24 ms 2026-01-09 01:32:50.536965 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=1.51 ms 2026-01-09 01:32:51.538872 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.66 ms 2026-01-09 01:32:51.538963 | orchestrator | 2026-01-09 01:32:51.538972 | orchestrator | --- 192.168.112.163 ping statistics --- 2026-01-09 01:32:51.538980 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2026-01-09 01:32:51.538985 | orchestrator | rtt min/avg/max/mdev = 1.506/3.136/6.239/2.194 ms 2026-01-09 01:32:51.538991 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:32:51.538997 | orchestrator | + ping -c3 192.168.112.126 2026-01-09 01:32:51.549555 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data. 2026-01-09 01:32:51.549643 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=5.38 ms 2026-01-09 01:32:52.548159 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.28 ms 2026-01-09 01:32:53.549518 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.85 ms 2026-01-09 01:32:53.549619 | orchestrator | 2026-01-09 01:32:53.549706 | orchestrator | --- 192.168.112.126 ping statistics --- 2026-01-09 01:32:53.549716 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-09 01:32:53.549721 | orchestrator | rtt min/avg/max/mdev = 1.845/3.165/5.375/1.572 ms 2026-01-09 01:32:53.549736 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:32:53.549741 | orchestrator | + ping -c3 192.168.112.193 2026-01-09 01:32:53.562573 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 
2026-01-09 01:32:53.562665 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=9.04 ms 2026-01-09 01:32:54.557527 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.50 ms 2026-01-09 01:32:55.558715 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=2.05 ms 2026-01-09 01:32:55.558808 | orchestrator | 2026-01-09 01:32:55.558817 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-01-09 01:32:55.558823 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-09 01:32:55.558828 | orchestrator | rtt min/avg/max/mdev = 2.054/4.532/9.041/3.193 ms 2026-01-09 01:32:55.559198 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:32:55.559229 | orchestrator | + ping -c3 192.168.112.185 2026-01-09 01:32:55.569469 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 2026-01-09 01:32:55.569564 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=5.70 ms 2026-01-09 01:32:56.568578 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.73 ms 2026-01-09 01:32:57.569217 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.51 ms 2026-01-09 01:32:57.569307 | orchestrator | 2026-01-09 01:32:57.569314 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-01-09 01:32:57.569320 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-09 01:32:57.569325 | orchestrator | rtt min/avg/max/mdev = 1.507/3.312/5.700/1.760 ms 2026-01-09 01:32:57.569629 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:32:57.569641 | orchestrator | + ping -c3 192.168.112.132 2026-01-09 01:32:57.581021 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 
2026-01-09 01:32:57.581147 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=5.48 ms 2026-01-09 01:32:58.579504 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.15 ms 2026-01-09 01:32:59.580640 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.88 ms 2026-01-09 01:32:59.581615 | orchestrator | 2026-01-09 01:32:59.581673 | orchestrator | --- 192.168.112.132 ping statistics --- 2026-01-09 01:32:59.581683 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-09 01:32:59.581691 | orchestrator | rtt min/avg/max/mdev = 1.883/3.170/5.480/1.636 ms 2026-01-09 01:32:59.581709 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-09 01:32:59.581717 | orchestrator | + compute_list 2026-01-09 01:32:59.581723 | orchestrator | + osism manage compute list testbed-node-3 2026-01-09 01:33:03.273456 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:03.273547 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:03.273555 | orchestrator | |--------------------------------------+--------+----------| 2026-01-09 01:33:03.273560 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE | 2026-01-09 01:33:03.273565 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE | 2026-01-09 01:33:03.273570 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:03.607661 | orchestrator | + osism manage compute list testbed-node-4 2026-01-09 01:33:07.158512 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:07.158614 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:07.158620 | orchestrator | |--------------------------------------+--------+----------| 2026-01-09 01:33:07.158624 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE | 2026-01-09 01:33:07.158629 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-01-09 01:33:07.520371 | orchestrator | + osism manage compute list testbed-node-5 2026-01-09 01:33:11.000679 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:11.000793 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:11.000808 | orchestrator | |--------------------------------------+--------+----------| 2026-01-09 01:33:11.000817 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE | 2026-01-09 01:33:11.000825 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE | 2026-01-09 01:33:11.000832 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:11.402737 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-01-09 01:33:14.692849 | orchestrator | 2026-01-09 01:33:14 | INFO  | Live migrating server 95be74b0-5ba7-4871-a4cc-1f67a4cae876 2026-01-09 01:33:28.460489 | orchestrator | 2026-01-09 01:33:28 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:30.848441 | orchestrator | 2026-01-09 01:33:30 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:33.305630 | orchestrator | 2026-01-09 01:33:33 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:35.570649 | orchestrator | 2026-01-09 01:33:35 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:37.966451 | orchestrator | 2026-01-09 01:33:37 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:40.239846 | orchestrator | 2026-01-09 01:33:40 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:42.648206 | orchestrator | 2026-01-09 
01:33:42 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:44.953295 | orchestrator | 2026-01-09 01:33:44 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress 2026-01-09 01:33:47.278673 | orchestrator | 2026-01-09 01:33:47 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) completed with status ACTIVE 2026-01-09 01:33:47.650637 | orchestrator | + compute_list 2026-01-09 01:33:47.650791 | orchestrator | + osism manage compute list testbed-node-3 2026-01-09 01:33:50.821330 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:50.821417 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:50.821422 | orchestrator | |--------------------------------------+--------+----------| 2026-01-09 01:33:50.821427 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE | 2026-01-09 01:33:50.821432 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE | 2026-01-09 01:33:50.821436 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE | 2026-01-09 01:33:50.821441 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:51.183818 | orchestrator | + osism manage compute list testbed-node-4 2026-01-09 01:33:54.089441 | orchestrator | +------+--------+----------+ 2026-01-09 01:33:54.089543 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:54.089555 | orchestrator | |------+--------+----------| 2026-01-09 01:33:54.089562 | orchestrator | +------+--------+----------+ 2026-01-09 01:33:54.444053 | orchestrator | + osism manage compute list testbed-node-5 2026-01-09 01:33:58.187347 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:33:58.187455 | orchestrator | | ID | Name | Status | 2026-01-09 01:33:58.187475 | orchestrator | 
|--------------------------------------+--------+----------|
2026-01-09 01:33:58.187482 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE |
2026-01-09 01:33:58.187489 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE |
2026-01-09 01:33:58.187496 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:33:58.546810 | orchestrator | + server_ping
2026-01-09 01:33:58.548487 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-09 01:33:58.548944 | orchestrator | ++ tr -d '\r'
2026-01-09 01:34:01.491140 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:34:01.491952 | orchestrator | + ping -c3 192.168.112.163
2026-01-09 01:34:01.499549 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2026-01-09 01:34:01.499632 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=5.28 ms
2026-01-09 01:34:02.499275 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.79 ms
2026-01-09 01:34:03.500293 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=2.01 ms
2026-01-09 01:34:03.500387 | orchestrator |
2026-01-09 01:34:03.500400 | orchestrator | --- 192.168.112.163 ping statistics ---
2026-01-09 01:34:03.500409 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:34:03.500417 | orchestrator | rtt min/avg/max/mdev = 2.008/3.358/5.278/1.394 ms
2026-01-09 01:34:03.500426 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:34:03.500434 | orchestrator | + ping -c3 192.168.112.126
2026-01-09 01:34:03.513720 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2026-01-09 01:34:03.513812 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=8.50 ms
2026-01-09 01:34:04.509341 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.27 ms
2026-01-09 01:34:05.510478 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.79 ms
2026-01-09 01:34:05.510575 | orchestrator |
2026-01-09 01:34:05.510584 | orchestrator | --- 192.168.112.126 ping statistics ---
2026-01-09 01:34:05.510592 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:34:05.510600 | orchestrator | rtt min/avg/max/mdev = 1.788/4.184/8.500/3.057 ms
2026-01-09 01:34:05.511141 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:34:05.511200 | orchestrator | + ping -c3 192.168.112.193
2026-01-09 01:34:05.522499 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-01-09 01:34:05.522581 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=6.49 ms
2026-01-09 01:34:06.519766 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.17 ms
2026-01-09 01:34:07.520962 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.79 ms
2026-01-09 01:34:07.521097 | orchestrator |
2026-01-09 01:34:07.521105 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-01-09 01:34:07.521111 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:34:07.521116 | orchestrator | rtt min/avg/max/mdev = 1.792/3.481/6.487/2.130 ms
2026-01-09 01:34:07.521324 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:34:07.521489 | orchestrator | + ping -c3 192.168.112.185
2026-01-09 01:34:07.529717 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-01-09 01:34:07.529815 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=4.95 ms
2026-01-09 01:34:08.528033 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=1.97 ms
2026-01-09 01:34:09.529296 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=1.81 ms
2026-01-09 01:34:09.529402 | orchestrator |
2026-01-09 01:34:09.529414 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-01-09 01:34:09.529423 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-09 01:34:09.529430 | orchestrator | rtt min/avg/max/mdev = 1.809/2.909/4.947/1.442 ms
2026-01-09 01:34:09.529638 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:34:09.529651 | orchestrator | + ping -c3 192.168.112.132
2026-01-09 01:34:09.542465 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-01-09 01:34:09.542563 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.83 ms
2026-01-09 01:34:10.537379 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.16 ms
2026-01-09 01:34:11.539225 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.98 ms
2026-01-09 01:34:11.583674 | orchestrator |
2026-01-09 01:34:11.583763 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-01-09 01:34:11.583771 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-09 01:34:11.583778 | orchestrator | rtt min/avg/max/mdev = 1.980/3.990/7.833/2.717 ms
2026-01-09 01:34:11.583801 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-01-09 01:34:14.854186 | orchestrator | 2026-01-09 01:34:14 | INFO  | Live migrating server 1ae51e61-6e80-4da0-8970-0c18d56246a5
2026-01-09 01:34:26.798226 | orchestrator | 2026-01-09 01:34:26 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:29.153617 | orchestrator | 2026-01-09 01:34:29 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:31.484523 | orchestrator | 2026-01-09 01:34:31 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:33.867808 | orchestrator | 2026-01-09 01:34:33 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:36.149492 | orchestrator | 2026-01-09 01:34:36 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:38.429599 | orchestrator | 2026-01-09 01:34:38 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:40.684046 | orchestrator | 2026-01-09 01:34:40 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:42.962139 | orchestrator | 2026-01-09 01:34:42 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:45.307116 | orchestrator | 2026-01-09 01:34:45 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:34:47.651230 | orchestrator | 2026-01-09 01:34:47 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) completed with status ACTIVE
2026-01-09 01:34:47.651311 | orchestrator | 2026-01-09 01:34:47 | INFO  | Live migrating server e2a01963-6fd5-405b-ab35-4fd6aca1b5b7
2026-01-09 01:35:00.269146 | orchestrator | 2026-01-09 01:35:00 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:02.649367 | orchestrator | 2026-01-09 01:35:02 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:04.997740 | orchestrator | 2026-01-09 01:35:04 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:07.382918 | orchestrator | 2026-01-09 01:35:07 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:09.756689 | orchestrator | 2026-01-09 01:35:09 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:12.046349 | orchestrator | 2026-01-09 01:35:12 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:14.296028 | orchestrator | 2026-01-09 01:35:14 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:16.565819 | orchestrator | 2026-01-09 01:35:16 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:35:18.831912 | orchestrator | 2026-01-09 01:35:18 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) completed with status ACTIVE
2026-01-09 01:35:19.217569 | orchestrator | + compute_list
2026-01-09 01:35:19.217657 | orchestrator | + osism manage compute list testbed-node-3
2026-01-09 01:35:22.758475 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:35:22.758557 | orchestrator | | ID | Name | Status |
2026-01-09 01:35:22.758563 | orchestrator | |--------------------------------------+--------+----------|
2026-01-09 01:35:22.758568 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE |
2026-01-09 01:35:22.758572 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE |
2026-01-09 01:35:22.758576 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE |
2026-01-09 01:35:22.758580 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE |
2026-01-09 01:35:22.758584 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE |
2026-01-09 01:35:22.758588 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:35:23.144161 | orchestrator | + osism manage compute list testbed-node-4
2026-01-09 01:35:26.053926 | orchestrator | +------+--------+----------+
2026-01-09 01:35:26.054137 | orchestrator | | ID | Name | Status |
2026-01-09 01:35:26.054148 | orchestrator | |------+--------+----------|
2026-01-09 01:35:26.054155 | orchestrator | +------+--------+----------+
2026-01-09 01:35:26.424046 | orchestrator | + osism manage compute list testbed-node-5
2026-01-09 01:35:29.237761 | orchestrator | +------+--------+----------+
2026-01-09 01:35:29.237893 | orchestrator | | ID | Name | Status |
2026-01-09 01:35:29.237903 | orchestrator | |------+--------+----------|
2026-01-09 01:35:29.237907 | orchestrator | +------+--------+----------+
2026-01-09 01:35:29.481298 | orchestrator | + server_ping
2026-01-09 01:35:29.481511 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-09 01:35:29.481538 | orchestrator | ++ tr -d '\r'
2026-01-09 01:35:32.157821 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:35:32.157986 | orchestrator | + ping -c3 192.168.112.163
2026-01-09 01:35:32.167072 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2026-01-09 01:35:32.167168 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=5.81 ms
2026-01-09 01:35:33.164982 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.38 ms
2026-01-09 01:35:34.166768 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.96 ms
2026-01-09 01:35:34.166868 | orchestrator |
2026-01-09 01:35:34.166880 | orchestrator | --- 192.168.112.163 ping statistics ---
2026-01-09 01:35:34.166888 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:35:34.166894 | orchestrator | rtt min/avg/max/mdev = 1.961/3.385/5.812/1.724 ms
2026-01-09 01:35:34.166902 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:35:34.167172 | orchestrator | + ping -c3 192.168.112.126
2026-01-09 01:35:34.178280 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2026-01-09 01:35:34.178385 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=7.51 ms
2026-01-09 01:35:35.175309 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.47 ms
2026-01-09 01:35:36.176403 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.79 ms
2026-01-09 01:35:36.177040 | orchestrator |
2026-01-09 01:35:36.177106 | orchestrator | --- 192.168.112.126 ping statistics ---
2026-01-09 01:35:36.177120 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-09 01:35:36.177130 | orchestrator | rtt min/avg/max/mdev = 1.788/3.921/7.507/2.550 ms
2026-01-09 01:35:36.177557 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:35:36.177585 | orchestrator | + ping -c3 192.168.112.193
2026-01-09 01:35:36.190237 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-01-09 01:35:36.190346 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=7.45 ms
2026-01-09 01:35:37.186068 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=1.86 ms
2026-01-09 01:35:38.187000 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.45 ms
2026-01-09 01:35:38.187094 | orchestrator |
2026-01-09 01:35:38.187102 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-01-09 01:35:38.187108 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:35:38.187112 | orchestrator | rtt min/avg/max/mdev = 1.445/3.586/7.451/2.737 ms
2026-01-09 01:35:38.187677 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:35:38.187705 | orchestrator | + ping -c3 192.168.112.185
2026-01-09 01:35:38.196003 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-01-09 01:35:38.196107 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=4.27 ms
2026-01-09 01:35:39.195241 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.05 ms
2026-01-09 01:35:40.196348 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.14 ms
2026-01-09 01:35:40.196451 | orchestrator |
2026-01-09 01:35:40.196460 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-01-09 01:35:40.196466 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:35:40.196470 | orchestrator | rtt min/avg/max/mdev = 2.052/2.820/4.273/1.027 ms
2026-01-09 01:35:40.196719 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:35:40.196744 | orchestrator | + ping -c3 192.168.112.132
2026-01-09 01:35:40.205251 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-01-09 01:35:40.205324 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=5.02 ms
2026-01-09 01:35:41.204739 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.97 ms
2026-01-09 01:35:42.206141 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.11 ms
2026-01-09 01:35:42.206303 | orchestrator |
2026-01-09 01:35:42.206335 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-01-09 01:35:42.206356 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:35:42.206376 | orchestrator | rtt min/avg/max/mdev = 2.106/3.365/5.021/1.222 ms
2026-01-09 01:35:42.206388 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-01-09 01:35:45.685324 | orchestrator | 2026-01-09 01:35:45 | INFO  | Live migrating server 1ae51e61-6e80-4da0-8970-0c18d56246a5
2026-01-09 01:35:56.653237 | orchestrator | 2026-01-09 01:35:56 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:35:58.984464 | orchestrator | 2026-01-09 01:35:58 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:01.337824 | orchestrator | 2026-01-09 01:36:01 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:03.693665 | orchestrator | 2026-01-09 01:36:03 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:05.963319 | orchestrator | 2026-01-09 01:36:05 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:08.335555 | orchestrator | 2026-01-09 01:36:08 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:10.617395 | orchestrator | 2026-01-09 01:36:10 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:12.897980 | orchestrator | 2026-01-09 01:36:12 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:36:15.187806 | orchestrator | 2026-01-09 01:36:15 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) completed with status ACTIVE
2026-01-09 01:36:15.187902 | orchestrator | 2026-01-09 01:36:15 | INFO  | Live migrating server 925b95e3-480a-4a17-b850-0d7e2b6149b2
2026-01-09 01:36:27.402320 | orchestrator | 2026-01-09 01:36:27 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:29.719554 | orchestrator | 2026-01-09 01:36:29 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:32.087673 | orchestrator | 2026-01-09 01:36:32 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:34.348473 | orchestrator | 2026-01-09 01:36:34 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:36.668660 | orchestrator | 2026-01-09 01:36:36 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:38.935983 | orchestrator | 2026-01-09 01:36:38 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:41.428954 | orchestrator | 2026-01-09 01:36:41 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:43.778291 | orchestrator | 2026-01-09 01:36:43 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:36:46.067103 | orchestrator | 2026-01-09 01:36:46 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) completed with status ACTIVE
2026-01-09 01:36:46.067193 | orchestrator | 2026-01-09 01:36:46 | INFO  | Live migrating server 95be74b0-5ba7-4871-a4cc-1f67a4cae876
2026-01-09 01:36:56.759527 | orchestrator | 2026-01-09 01:36:56 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:36:59.286102 | orchestrator | 2026-01-09 01:36:59 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:01.620588 | orchestrator | 2026-01-09 01:37:01 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:03.975780 | orchestrator | 2026-01-09 01:37:03 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:06.249449 | orchestrator | 2026-01-09 01:37:06 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:08.573536 | orchestrator | 2026-01-09 01:37:08 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:10.890987 | orchestrator | 2026-01-09 01:37:10 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:13.118971 | orchestrator | 2026-01-09 01:37:13 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:37:15.416374 | orchestrator | 2026-01-09 01:37:15 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) completed with status ACTIVE
2026-01-09 01:37:15.416526 | orchestrator | 2026-01-09 01:37:15 | INFO  | Live migrating server e2a01963-6fd5-405b-ab35-4fd6aca1b5b7
2026-01-09 01:37:25.137798 | orchestrator | 2026-01-09 01:37:25 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:27.492774 | orchestrator | 2026-01-09 01:37:27 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:29.913026 | orchestrator | 2026-01-09 01:37:29 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:32.198081 | orchestrator | 2026-01-09 01:37:32 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:34.570128 | orchestrator | 2026-01-09 01:37:34 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:36.882460 | orchestrator | 2026-01-09 01:37:36 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:39.379165 | orchestrator | 2026-01-09 01:37:39 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:41.875156 | orchestrator | 2026-01-09 01:37:41 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:37:44.226660 | orchestrator | 2026-01-09 01:37:44 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) completed with status ACTIVE
2026-01-09 01:37:44.226735 | orchestrator | 2026-01-09 01:37:44 | INFO  | Live migrating server 0a7520eb-2c4f-48c1-994c-61a8501880f8
2026-01-09 01:37:54.502538 | orchestrator | 2026-01-09 01:37:54 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:37:56.874261 | orchestrator | 2026-01-09 01:37:56 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:37:59.217290 | orchestrator | 2026-01-09 01:37:59 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:01.709519 | orchestrator | 2026-01-09 01:38:01 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:04.126008 | orchestrator | 2026-01-09 01:38:04 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:06.419945 | orchestrator | 2026-01-09 01:38:06 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:08.796336 | orchestrator | 2026-01-09 01:38:08 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:11.068405 | orchestrator | 2026-01-09 01:38:11 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:13.318917 | orchestrator | 2026-01-09 01:38:13 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:15.609754 | orchestrator | 2026-01-09 01:38:15 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:38:17.965350 | orchestrator | 2026-01-09 01:38:17 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) completed with status ACTIVE
2026-01-09 01:38:18.340706 | orchestrator | + compute_list
2026-01-09 01:38:18.340800 | orchestrator | + osism manage compute list testbed-node-3
2026-01-09 01:38:21.166596 | orchestrator | +------+--------+----------+
2026-01-09 01:38:21.166701 | orchestrator | | ID | Name | Status |
2026-01-09 01:38:21.166715 | orchestrator | |------+--------+----------|
2026-01-09 01:38:21.166723 | orchestrator | +------+--------+----------+
2026-01-09 01:38:21.517457 | orchestrator | + osism manage compute list testbed-node-4
2026-01-09 01:38:24.823190 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:38:24.823276 | orchestrator | | ID | Name | Status |
2026-01-09 01:38:24.823283 | orchestrator | |--------------------------------------+--------+----------|
2026-01-09 01:38:24.823288 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE |
2026-01-09 01:38:24.823293 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE |
2026-01-09 01:38:24.823298 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE |
2026-01-09 01:38:24.823303 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE |
2026-01-09 01:38:24.823311 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE |
2026-01-09 01:38:24.823318 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:38:25.166423 | orchestrator | + osism manage compute list testbed-node-5
2026-01-09 01:38:27.978060 | orchestrator | +------+--------+----------+
2026-01-09 01:38:27.978166 | orchestrator | | ID | Name | Status |
2026-01-09 01:38:27.978175 | orchestrator | |------+--------+----------|
2026-01-09 01:38:27.978181 | orchestrator | +------+--------+----------+
2026-01-09 01:38:28.351888 | orchestrator | + server_ping
2026-01-09 01:38:28.353417 | orchestrator | ++ tr -d '\r'
2026-01-09 01:38:28.353486 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-09 01:38:31.279355 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:38:31.279440 | orchestrator | + ping -c3 192.168.112.163
2026-01-09 01:38:31.288676 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2026-01-09 01:38:31.288770 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=5.53 ms
2026-01-09 01:38:32.287114 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=1.93 ms
2026-01-09 01:38:33.288989 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.94 ms
2026-01-09 01:38:33.289073 | orchestrator |
2026-01-09 01:38:33.289081 | orchestrator | --- 192.168.112.163 ping statistics ---
2026-01-09 01:38:33.289087 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:38:33.289092 | orchestrator | rtt min/avg/max/mdev = 1.932/3.132/5.531/1.695 ms
2026-01-09 01:38:33.289097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:38:33.289102 | orchestrator | + ping -c3 192.168.112.126
2026-01-09 01:38:33.300299 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2026-01-09 01:38:33.300395 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=5.65 ms
2026-01-09 01:38:34.298728 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.17 ms
2026-01-09 01:38:35.300577 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.78 ms
2026-01-09 01:38:35.300657 | orchestrator |
2026-01-09 01:38:35.300668 | orchestrator | --- 192.168.112.126 ping statistics ---
2026-01-09 01:38:35.300676 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:38:35.300683 | orchestrator | rtt min/avg/max/mdev = 1.783/3.199/5.645/1.736 ms
2026-01-09 01:38:35.300691 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:38:35.300699 | orchestrator | + ping -c3 192.168.112.193
2026-01-09 01:38:35.313118 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-01-09 01:38:35.313218 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=6.39 ms
2026-01-09 01:38:36.310660 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.41 ms
2026-01-09 01:38:37.311952 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.82 ms
2026-01-09 01:38:37.312043 | orchestrator |
2026-01-09 01:38:37.312053 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-01-09 01:38:37.312061 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-09 01:38:37.312068 | orchestrator | rtt min/avg/max/mdev = 1.821/3.539/6.386/2.027 ms
2026-01-09 01:38:37.315549 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:38:37.315662 | orchestrator | + ping -c3 192.168.112.185
2026-01-09 01:38:37.325256 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-01-09 01:38:37.325332 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=5.38 ms
2026-01-09 01:38:38.324240 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.54 ms
2026-01-09 01:38:39.324821 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.13 ms
2026-01-09 01:38:39.324918 | orchestrator |
2026-01-09 01:38:39.324926 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-01-09 01:38:39.324932 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:38:39.324937 | orchestrator | rtt min/avg/max/mdev = 2.133/3.352/5.379/1.443 ms
2026-01-09 01:38:39.325463 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:38:39.325480 | orchestrator | + ping -c3 192.168.112.132
2026-01-09 01:38:39.336697 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-01-09 01:38:39.336801 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.01 ms
2026-01-09 01:38:40.332007 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.00 ms
2026-01-09 01:38:41.333419 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.08 ms
2026-01-09 01:38:41.333536 | orchestrator |
2026-01-09 01:38:41.333585 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-01-09 01:38:41.333598 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-09 01:38:41.333608 | orchestrator | rtt min/avg/max/mdev = 1.996/3.695/7.014/2.346 ms
2026-01-09 01:38:41.334186 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-01-09 01:38:44.982243 | orchestrator | 2026-01-09 01:38:44 | INFO  | Live migrating server 1ae51e61-6e80-4da0-8970-0c18d56246a5
2026-01-09 01:38:55.202130 | orchestrator | 2026-01-09 01:38:55 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:38:57.757465 | orchestrator | 2026-01-09 01:38:57 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:00.097005 | orchestrator | 2026-01-09 01:39:00 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:02.441794 | orchestrator | 2026-01-09 01:39:02 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:04.800867 | orchestrator | 2026-01-09 01:39:04 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:07.086628 | orchestrator | 2026-01-09 01:39:07 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:09.456059 | orchestrator | 2026-01-09 01:39:09 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:11.744276 | orchestrator | 2026-01-09 01:39:11 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) is still in progress
2026-01-09 01:39:14.032611 | orchestrator | 2026-01-09 01:39:14 | INFO  | Live migration of 1ae51e61-6e80-4da0-8970-0c18d56246a5 (test-4) completed with status ACTIVE
2026-01-09 01:39:14.032707 | orchestrator | 2026-01-09 01:39:14 | INFO  | Live migrating server 925b95e3-480a-4a17-b850-0d7e2b6149b2
2026-01-09 01:39:24.458918 | orchestrator | 2026-01-09 01:39:24 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:26.828476 | orchestrator | 2026-01-09 01:39:26 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:29.196714 | orchestrator | 2026-01-09 01:39:29 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:31.481329 | orchestrator | 2026-01-09 01:39:31 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:33.829523 | orchestrator | 2026-01-09 01:39:33 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:36.245282 | orchestrator | 2026-01-09 01:39:36 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:38.591153 | orchestrator | 2026-01-09 01:39:38 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:40.864069 | orchestrator | 2026-01-09 01:39:40 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) is still in progress
2026-01-09 01:39:43.165107 | orchestrator | 2026-01-09 01:39:43 | INFO  | Live migration of 925b95e3-480a-4a17-b850-0d7e2b6149b2 (test-3) completed with status ACTIVE
2026-01-09 01:39:43.165213 | orchestrator | 2026-01-09 01:39:43 | INFO  | Live migrating server 95be74b0-5ba7-4871-a4cc-1f67a4cae876
2026-01-09 01:39:54.890693 | orchestrator | 2026-01-09 01:39:54 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:39:57.313889 | orchestrator | 2026-01-09 01:39:57 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:39:59.683457 | orchestrator | 2026-01-09 01:39:59 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:01.964904 | orchestrator | 2026-01-09 01:40:01 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:04.321170 | orchestrator | 2026-01-09 01:40:04 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:06.623764 | orchestrator | 2026-01-09 01:40:06 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:09.047910 | orchestrator | 2026-01-09 01:40:09 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:11.328410 | orchestrator | 2026-01-09 01:40:11 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) is still in progress
2026-01-09 01:40:13.621491 | orchestrator | 2026-01-09 01:40:13 | INFO  | Live migration of 95be74b0-5ba7-4871-a4cc-1f67a4cae876 (test-2) completed with status ACTIVE
2026-01-09 01:40:13.621591 | orchestrator | 2026-01-09 01:40:13 | INFO  | Live migrating server e2a01963-6fd5-405b-ab35-4fd6aca1b5b7
2026-01-09 01:40:24.093150 | orchestrator | 2026-01-09 01:40:24 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:26.461404 | orchestrator | 2026-01-09 01:40:26 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:28.864434 | orchestrator | 2026-01-09 01:40:28 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:31.375541 | orchestrator | 2026-01-09 01:40:31 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:33.732126 | orchestrator | 2026-01-09 01:40:33 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:36.099987 | orchestrator | 2026-01-09 01:40:36 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:38.470267 | orchestrator | 2026-01-09 01:40:38 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:40.746362 | orchestrator | 2026-01-09 01:40:40 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) is still in progress
2026-01-09 01:40:43.050213 | orchestrator | 2026-01-09 01:40:43 | INFO  | Live migration of e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 (test-1) completed with status ACTIVE
2026-01-09 01:40:43.050337 | orchestrator | 2026-01-09 01:40:43 | INFO  | Live migrating server 0a7520eb-2c4f-48c1-994c-61a8501880f8
2026-01-09 01:40:53.394952 | orchestrator | 2026-01-09 01:40:53 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:40:55.766626 | orchestrator | 2026-01-09 01:40:55 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:40:58.116452 | orchestrator | 2026-01-09 01:40:58 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:00.473100 | orchestrator | 2026-01-09 01:41:00 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:02.753887 | orchestrator | 2026-01-09 01:41:02 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:05.055399 | orchestrator | 2026-01-09 01:41:05 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:07.419105 | orchestrator | 2026-01-09 01:41:07 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:09.727231 | orchestrator | 2026-01-09 01:41:09 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:12.002301 | orchestrator | 2026-01-09 01:41:12 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:14.311129 | orchestrator | 2026-01-09 01:41:14 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) is still in progress
2026-01-09 01:41:16.625283 | orchestrator | 2026-01-09 01:41:16 | INFO  | Live migration of 0a7520eb-2c4f-48c1-994c-61a8501880f8 (test) completed with status ACTIVE
2026-01-09 01:41:17.042794 | orchestrator | + compute_list
2026-01-09 01:41:17.042909 | orchestrator | + osism manage compute list testbed-node-3
2026-01-09 01:41:19.922387 | orchestrator | +------+--------+----------+
2026-01-09 01:41:19.922509 | orchestrator | | ID | Name | Status |
2026-01-09 01:41:19.922525 | orchestrator | |------+--------+----------|
2026-01-09 01:41:19.922537 | orchestrator | +------+--------+----------+
2026-01-09 01:41:20.332643 | orchestrator | + osism manage compute list testbed-node-4
2026-01-09 01:41:23.373855 | orchestrator | +------+--------+----------+
2026-01-09 01:41:23.373980 | orchestrator | | ID | Name | Status |
2026-01-09 01:41:23.373994 | orchestrator | |------+--------+----------|
2026-01-09 01:41:23.374003 | orchestrator | +------+--------+----------+
2026-01-09 01:41:23.770693 | orchestrator | + osism manage compute list testbed-node-5
2026-01-09 01:41:27.271045 | orchestrator | +--------------------------------------+--------+----------+
2026-01-09 01:41:27.271126 | orchestrator | | ID | Name | 
Status | 2026-01-09 01:41:27.271133 | orchestrator | |--------------------------------------+--------+----------| 2026-01-09 01:41:27.271137 | orchestrator | | 1ae51e61-6e80-4da0-8970-0c18d56246a5 | test-4 | ACTIVE | 2026-01-09 01:41:27.271141 | orchestrator | | 925b95e3-480a-4a17-b850-0d7e2b6149b2 | test-3 | ACTIVE | 2026-01-09 01:41:27.271146 | orchestrator | | 95be74b0-5ba7-4871-a4cc-1f67a4cae876 | test-2 | ACTIVE | 2026-01-09 01:41:27.271150 | orchestrator | | e2a01963-6fd5-405b-ab35-4fd6aca1b5b7 | test-1 | ACTIVE | 2026-01-09 01:41:27.271154 | orchestrator | | 0a7520eb-2c4f-48c1-994c-61a8501880f8 | test | ACTIVE | 2026-01-09 01:41:27.271158 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-09 01:41:27.679026 | orchestrator | + server_ping 2026-01-09 01:41:27.680742 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-09 01:41:27.681156 | orchestrator | ++ tr -d '\r' 2026-01-09 01:41:30.562449 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-09 01:41:30.562525 | orchestrator | + ping -c3 192.168.112.163 2026-01-09 01:41:30.571921 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data. 
2026-01-09 01:41:30.571989 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=5.78 ms
2026-01-09 01:41:31.571229 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.79 ms
2026-01-09 01:41:32.573243 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=2.14 ms
2026-01-09 01:41:32.573335 | orchestrator |
2026-01-09 01:41:32.573344 | orchestrator | --- 192.168.112.163 ping statistics ---
2026-01-09 01:41:32.573351 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-09 01:41:32.573356 | orchestrator | rtt min/avg/max/mdev = 2.144/3.569/5.775/1.581 ms
2026-01-09 01:41:32.573419 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:41:32.573430 | orchestrator | + ping -c3 192.168.112.126
2026-01-09 01:41:32.587025 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2026-01-09 01:41:32.587099 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=9.38 ms
2026-01-09 01:41:33.582954 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=3.17 ms
2026-01-09 01:41:34.582739 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.87 ms
2026-01-09 01:41:34.582889 | orchestrator |
2026-01-09 01:41:34.582901 | orchestrator | --- 192.168.112.126 ping statistics ---
2026-01-09 01:41:34.582929 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:41:34.582937 | orchestrator | rtt min/avg/max/mdev = 1.873/4.808/9.384/3.278 ms
2026-01-09 01:41:34.583384 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:41:34.583418 | orchestrator | + ping -c3 192.168.112.193
2026-01-09 01:41:34.596325 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-01-09 01:41:34.596408 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=7.87 ms
2026-01-09 01:41:35.592121 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=3.12 ms
2026-01-09 01:41:36.592830 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=2.37 ms
2026-01-09 01:41:36.592939 | orchestrator |
2026-01-09 01:41:36.592952 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-01-09 01:41:36.592963 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-09 01:41:36.592971 | orchestrator | rtt min/avg/max/mdev = 2.366/4.451/7.873/2.438 ms
2026-01-09 01:41:36.593446 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:41:36.593484 | orchestrator | + ping -c3 192.168.112.185
2026-01-09 01:41:36.606506 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2026-01-09 01:41:36.606669 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=7.37 ms
2026-01-09 01:41:37.603378 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.58 ms
2026-01-09 01:41:38.605937 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.23 ms
2026-01-09 01:41:38.606853 | orchestrator |
2026-01-09 01:41:38.606891 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-01-09 01:41:38.606901 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-09 01:41:38.606908 | orchestrator | rtt min/avg/max/mdev = 2.231/4.058/7.367/2.344 ms
2026-01-09 01:41:38.606916 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-09 01:41:38.606924 | orchestrator | + ping -c3 192.168.112.132
2026-01-09 01:41:38.617803 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2026-01-09 01:41:38.617876 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=6.16 ms
2026-01-09 01:41:39.615840 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.90 ms
2026-01-09 01:41:40.617041 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=2.44 ms
2026-01-09 01:41:40.617130 | orchestrator |
2026-01-09 01:41:40.617142 | orchestrator | --- 192.168.112.132 ping statistics ---
2026-01-09 01:41:40.617151 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-09 01:41:40.617159 | orchestrator | rtt min/avg/max/mdev = 2.441/3.835/6.160/1.654 ms
2026-01-09 01:41:40.858464 | orchestrator | ok: Runtime: 0:21:10.360185
2026-01-09 01:41:40.912327 |
2026-01-09 01:41:40.912479 | TASK [Run tempest]
2026-01-09 01:41:41.681355 | orchestrator |
2026-01-09 01:41:41.681548 | orchestrator | # Tempest
2026-01-09 01:41:41.681582 | orchestrator |
2026-01-09 01:41:41.681602 | orchestrator | + set -e
2026-01-09 01:41:41.681624 | orchestrator | + echo
2026-01-09 01:41:41.681645 | orchestrator | + echo '# Tempest'
2026-01-09 01:41:41.681674 | orchestrator | + echo
2026-01-09 01:41:41.681738 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-01-09 01:41:54.111900 | orchestrator | 2026-01-09 01:41:54 | INFO  | Task 99ee0be3-9be5-4882-9f02-0c7ce15c5ab6 (tempest) was prepared for execution.
2026-01-09 01:41:54.112041 | orchestrator | 2026-01-09 01:41:54 | INFO  | It takes a moment until task 99ee0be3-9be5-4882-9f02-0c7ce15c5ab6 (tempest) has been started and output is visible here.
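The `server_ping` trace above loops over every ACTIVE floating IP and sends three ICMP probes per address. A minimal sketch of that helper is shown below; the function name matches the trace, but the injectable `PROBE` variable is an assumption added here so the loop logic can be exercised without a running OpenStack cloud or ICMP access (the real address list comes from `openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'`).

```shell
#!/bin/sh
# Sketch of the server_ping check seen in the trace above.
# PROBE is a hypothetical injection point (not in the original script):
# it defaults to the real "ping -c3" but can be replaced for testing.
PROBE="${PROBE:-ping -c3}"

server_ping() {
    for address in "$@"; do
        # Three probes per address; fail fast on the first unreachable one,
        # mirroring the per-address "ping -c3 <ip>" calls in the log.
        if ! $PROBE "$address"; then
            echo "unreachable: $address" >&2
            return 1
        fi
    done
    echo "all addresses reachable"
}

# Exercise the loop with a no-op probe standing in for ping.
PROBE=":"
server_ping 198.51.100.7 198.51.100.8
```

In the job itself the loop's exit status is what matters: `set -e` is active, so a single unreachable floating IP fails the whole task.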
2026-01-09 01:43:17.353265 | orchestrator |
2026-01-09 01:43:17.353401 | orchestrator | PLAY [Run tempest] *************************************************************
2026-01-09 01:43:17.354231 | orchestrator |
2026-01-09 01:43:17.354279 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-01-09 01:43:17.354305 | orchestrator | Friday 09 January 2026 01:41:58 +0000 (0:00:00.259) 0:00:00.259 ********
2026-01-09 01:43:17.354319 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.354334 | orchestrator |
2026-01-09 01:43:17.354347 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-01-09 01:43:17.354363 | orchestrator | Friday 09 January 2026 01:41:59 +0000 (0:00:00.774) 0:00:01.034 ********
2026-01-09 01:43:17.354375 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.354387 | orchestrator |
2026-01-09 01:43:17.354417 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-01-09 01:43:17.354430 | orchestrator | Friday 09 January 2026 01:42:00 +0000 (0:00:00.471) 0:00:02.393 ********
2026-01-09 01:43:17.354443 | orchestrator | ok: [testbed-manager]
2026-01-09 01:43:17.354456 | orchestrator |
2026-01-09 01:43:17.354468 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-01-09 01:43:17.354481 | orchestrator | Friday 09 January 2026 01:42:01 +0000 (0:00:00.471) 0:00:02.865 ********
2026-01-09 01:43:17.354494 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.354507 | orchestrator |
2026-01-09 01:43:17.354525 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-01-09 01:43:17.354539 | orchestrator | Friday 09 January 2026 01:42:25 +0000 (0:00:24.342) 0:00:27.207 ********
2026-01-09 01:43:17.354552 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-01-09 01:43:17.354568 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-01-09 01:43:17.354583 | orchestrator |
2026-01-09 01:43:17.354595 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-01-09 01:43:17.354607 | orchestrator | Friday 09 January 2026 01:42:34 +0000 (0:00:08.463) 0:00:35.671 ********
2026-01-09 01:43:17.354619 | orchestrator | ok: [testbed-manager] => {
2026-01-09 01:43:17.354632 | orchestrator |     "changed": false,
2026-01-09 01:43:17.354644 | orchestrator |     "msg": "All assertions passed"
2026-01-09 01:43:17.354656 | orchestrator | }
2026-01-09 01:43:17.354669 | orchestrator |
2026-01-09 01:43:17.354681 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-01-09 01:43:17.354693 | orchestrator | Friday 09 January 2026 01:42:34 +0000 (0:00:00.173) 0:00:35.845 ********
2026-01-09 01:43:17.354758 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.354770 | orchestrator |
2026-01-09 01:43:17.354782 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-01-09 01:43:17.354804 | orchestrator | Friday 09 January 2026 01:42:38 +0000 (0:00:03.857) 0:00:39.702 ********
2026-01-09 01:43:17.354815 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.354827 | orchestrator |
2026-01-09 01:43:17.354838 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-01-09 01:43:17.354849 | orchestrator | Friday 09 January 2026 01:42:40 +0000 (0:00:01.740) 0:00:41.443 ********
2026-01-09 01:43:17.354861 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.354872 | orchestrator |
2026-01-09 01:43:17.354883 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-01-09 01:43:17.354982 | orchestrator | Friday 09 January 2026 01:42:43 +0000 (0:00:03.792) 0:00:45.236 ********
2026-01-09 01:43:17.354997 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355008 | orchestrator |
2026-01-09 01:43:17.355020 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-01-09 01:43:17.355032 | orchestrator | Friday 09 January 2026 01:42:44 +0000 (0:00:00.206) 0:00:45.442 ********
2026-01-09 01:43:17.355044 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.355056 | orchestrator |
2026-01-09 01:43:17.355068 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-01-09 01:43:17.355080 | orchestrator | Friday 09 January 2026 01:42:46 +0000 (0:00:02.757) 0:00:48.200 ********
2026-01-09 01:43:17.355091 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.355103 | orchestrator |
2026-01-09 01:43:17.355115 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-01-09 01:43:17.355127 | orchestrator | Friday 09 January 2026 01:42:57 +0000 (0:00:10.589) 0:00:58.789 ********
2026-01-09 01:43:17.355138 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.355149 | orchestrator |
2026-01-09 01:43:17.355161 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-01-09 01:43:17.355173 | orchestrator | Friday 09 January 2026 01:42:58 +0000 (0:00:00.750) 0:00:59.540 ********
2026-01-09 01:43:17.355184 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355196 | orchestrator |
2026-01-09 01:43:17.355207 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-01-09 01:43:17.355219 | orchestrator | Friday 09 January 2026 01:42:59 +0000 (0:00:01.591) 0:01:01.132 ********
2026-01-09 01:43:17.355230 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355242 | orchestrator |
2026-01-09 01:43:17.355254 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-01-09 01:43:17.355266 | orchestrator | Friday 09 January 2026 01:43:01 +0000 (0:00:01.630) 0:01:02.762 ********
2026-01-09 01:43:17.355279 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355292 | orchestrator |
2026-01-09 01:43:17.355303 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-01-09 01:43:17.355315 | orchestrator | Friday 09 January 2026 01:43:01 +0000 (0:00:00.190) 0:01:02.953 ********
2026-01-09 01:43:17.355328 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355340 | orchestrator |
2026-01-09 01:43:17.355351 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-01-09 01:43:17.355359 | orchestrator | Friday 09 January 2026 01:43:01 +0000 (0:00:00.177) 0:01:03.131 ********
2026-01-09 01:43:17.355366 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-09 01:43:17.355373 | orchestrator |
2026-01-09 01:43:17.355381 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-01-09 01:43:17.355411 | orchestrator | Friday 09 January 2026 01:43:05 +0000 (0:00:04.003) 0:01:07.134 ********
2026-01-09 01:43:17.355425 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-01-09 01:43:17.355437 | orchestrator |     "changed": false,
2026-01-09 01:43:17.355448 | orchestrator |     "msg": "All assertions passed"
2026-01-09 01:43:17.355460 | orchestrator | }
2026-01-09 01:43:17.355471 | orchestrator |
2026-01-09 01:43:17.355483 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-01-09 01:43:17.355495 | orchestrator | Friday 09 January 2026 01:43:05 +0000 (0:00:00.199) 0:01:07.333 ********
2026-01-09 01:43:17.355507 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-01-09 01:43:17.355527 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-01-09 01:43:17.355538 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:43:17.355549 | orchestrator |
2026-01-09 01:43:17.355559 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-01-09 01:43:17.355570 | orchestrator | Friday 09 January 2026 01:43:06 +0000 (0:00:00.415) 0:01:07.749 ********
2026-01-09 01:43:17.355664 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:43:17.355677 | orchestrator |
2026-01-09 01:43:17.355688 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-01-09 01:43:17.355757 | orchestrator | Friday 09 January 2026 01:43:06 +0000 (0:00:00.164) 0:01:07.913 ********
2026-01-09 01:43:17.355771 | orchestrator | ok: [testbed-manager]
2026-01-09 01:43:17.355783 | orchestrator |
2026-01-09 01:43:17.355794 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-01-09 01:43:17.355805 | orchestrator | Friday 09 January 2026 01:43:06 +0000 (0:00:00.475) 0:01:08.389 ********
2026-01-09 01:43:17.355817 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.355828 | orchestrator |
2026-01-09 01:43:17.355840 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-01-09 01:43:17.355851 | orchestrator | Friday 09 January 2026 01:43:07 +0000 (0:00:00.915) 0:01:09.304 ********
2026-01-09 01:43:17.355862 | orchestrator | ok: [testbed-manager]
2026-01-09 01:43:17.355873 | orchestrator |
2026-01-09 01:43:17.355884 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-01-09 01:43:17.355896 | orchestrator | Friday 09 January 2026 01:43:08 +0000 (0:00:00.490) 0:01:09.795 ********
2026-01-09 01:43:17.355907 | orchestrator | skipping: [testbed-manager]
2026-01-09 01:43:17.355918 | orchestrator |
2026-01-09 01:43:17.355929 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-01-09 01:43:17.355940 | orchestrator | Friday 09 January 2026 01:43:08 +0000 (0:00:00.140) 0:01:09.935 ********
2026-01-09 01:43:17.355951 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-01-09 01:43:17.355963 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-01-09 01:43:17.355974 | orchestrator |
2026-01-09 01:43:17.355987 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-01-09 01:43:17.355998 | orchestrator | Friday 09 January 2026 01:43:16 +0000 (0:00:07.801) 0:01:17.737 ********
2026-01-09 01:43:17.356009 | orchestrator | changed: [testbed-manager]
2026-01-09 01:43:17.356019 | orchestrator |
2026-01-09 01:43:17.356029 | orchestrator | PLAY RECAP *********************************************************************
2026-01-09 01:43:17.356041 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-09 01:43:17.356053 | orchestrator |
2026-01-09 01:43:17.356064 | orchestrator |
2026-01-09 01:43:17.356074 | orchestrator | TASKS RECAP ********************************************************************
2026-01-09 01:43:17.356086 | orchestrator | Friday 09 January 2026 01:43:17 +0000 (0:00:01.032) 0:01:18.769 ********
2026-01-09 01:43:17.356096 | orchestrator | ===============================================================================
2026-01-09 01:43:17.356106 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 24.34s
2026-01-09 01:43:17.356118 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.59s
2026-01-09 01:43:17.356128 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.46s
2026-01-09 01:43:17.356138 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.80s
2026-01-09 01:43:17.356149 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.00s
2026-01-09 01:43:17.356159 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.86s
2026-01-09 01:43:17.356169 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.79s
2026-01-09 01:43:17.356180 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.76s
2026-01-09 01:43:17.356190 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.74s
2026-01-09 01:43:17.356200 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.63s
2026-01-09 01:43:17.356210 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.59s
2026-01-09 01:43:17.356231 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.36s
2026-01-09 01:43:17.356241 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.03s
2026-01-09 01:43:17.356252 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.92s
2026-01-09 01:43:17.356268 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.77s
2026-01-09 01:43:17.356278 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.75s
2026-01-09 01:43:17.356289 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.49s
2026-01-09 01:43:17.356310 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.48s
2026-01-09 01:43:17.759933 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.47s
2026-01-09 01:43:17.760021 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.42s
2026-01-09 01:43:18.132595 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-01-09 01:43:18.135957 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-01-09 01:43:18.139308 | orchestrator |
2026-01-09 01:43:18.139366 | orchestrator | ## IDENTITY (API)
2026-01-09 01:43:18.139373 | orchestrator |
2026-01-09 01:43:18.139378 | orchestrator | + echo
2026-01-09 01:43:18.139384 | orchestrator | + echo '## IDENTITY (API)'
2026-01-09 01:43:18.139394 | orchestrator | + echo
2026-01-09 01:43:18.139403 | orchestrator | + _tempest tempest.api.identity.v3
2026-01-09 01:43:18.139423 | orchestrator | + local regex=tempest.api.identity.v3
2026-01-09 01:43:18.139634 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-01-09 01:43:18.141045 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:43:18.145084 | orchestrator | + tee -a /opt/tempest/20260109-0143.log
2026-01-09 01:43:22.416156 | orchestrator | 2026-01-09 01:43:22.415 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:43:22.518963 | orchestrator | 2026-01-09 01:43:22.518 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:43:22.519074 | orchestrator | 2026-01-09 01:43:22.518 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:43:22.519088 | orchestrator | 2026-01-09 01:43:22.519 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:43:22.519110 | orchestrator | 2026-01-09 01:43:22.519 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:43:22.519621 | orchestrator | 2026-01-09 01:43:22.519 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:43:22.519643 | orchestrator | 2026-01-09 01:43:22.520 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:43:22.519835 | orchestrator | 2026-01-09 01:43:22.520 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:43:22.520337 | orchestrator | 2026-01-09 01:43:22.520 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:43:22.520633 | orchestrator | 2026-01-09 01:43:22.520 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:43:22.520812 | orchestrator | 2026-01-09 01:43:22.521 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:43:22.522926 | orchestrator | 2026-01-09 01:43:22.521 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:43:22.523009 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:43:22.523108 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:43:22.523124 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:43:22.523136 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:43:22.523148 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:43:22.523159 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:43:22.523170 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:43:22.523197 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:43:22.523208 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:43:22.523219 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:43:22.523230 | orchestrator | 2026-01-09 01:43:22.522 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:43:36.638241 | orchestrator |
2026-01-09 01:43:36.638349 | orchestrator | =========================
2026-01-09 01:43:36.638363 | orchestrator | Failures during discovery
2026-01-09 01:43:36.638371 | orchestrator | =========================
2026-01-09 01:43:36.638379 | orchestrator | --- stdout ---
2026-01-09 01:43:36.638388 | orchestrator | 2026-01-09 01:43:26.222 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:43:36.638397 | orchestrator | 2026-01-09 01:43:26.223 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:43:36.638420 | orchestrator | 2026-01-09 01:43:26.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:43:36.638436 | orchestrator | 2026-01-09 01:43:26.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:43:36.638444 | orchestrator | 2026-01-09 01:43:26.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:43:36.638452 | orchestrator | 2026-01-09 01:43:26.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:43:36.638460 | orchestrator | 2026-01-09 01:43:26.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:43:36.638470 | orchestrator | 2026-01-09 01:43:26.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:43:36.638478 | orchestrator | 2026-01-09 01:43:26.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:43:36.638485 | orchestrator | 2026-01-09 01:43:26.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:43:36.638493 | orchestrator | 2026-01-09 01:43:26.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:43:36.638500 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:43:36.638507 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:43:36.638540 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:43:36.638548 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:43:36.638555 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:43:36.638563 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:43:36.638571 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:43:36.638578 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:43:36.638585 | orchestrator | 2026-01-09 01:43:26.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:43:36.638605 | orchestrator | 2026-01-09 01:43:26.227 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:43:36.638613 | orchestrator | 2026-01-09 01:43:26.227 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:43:36.638620 | orchestrator | 2026-01-09 01:43:26.227 10 INFO tempest.test_discover.plugins [-] List
additional config options registered by Tempest plugin: ironic_tests 2026-01-09 01:43:36.638636 | orchestrator | 2026-01-09 01:43:26.229 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 2026-01-09 01:43:36.638650 | orchestrator | 2026-01-09 01:43:27.059 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-09 01:43:36.638657 | orchestrator | 2026-01-09 01:43:27.059 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-09 01:43:36.638665 | orchestrator | 2026-01-09 01:43:27.059 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-09 01:43:36.638672 | orchestrator | 2026-01-09 01:43:27.059 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-09 01:43:36.638739 | orchestrator | 2026-01-09 01:43:27.059 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-09 01:43:36.638754 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-09 01:43:36.638767 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-09 01:43:36.638780 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-09 01:43:36.638793 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-09 01:43:36.638806 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-09 
01:43:36.638818 | orchestrator | 2026-01-09 01:43:27.060 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-09 01:43:36.638829 | orchestrator | --- import errors --- 2026-01-09 01:43:36.638837 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-09 01:43:36.638845 | orchestrator | Traceback (most recent call last): 2026-01-09 01:43:36.638853 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-09 01:43:36.638860 | orchestrator | module = self._get_module_from_name(name) 2026-01-09 01:43:36.638868 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-09 01:43:36.638885 | orchestrator | __import__(name) 2026-01-09 01:43:36.638892 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-09 01:43:36.638900 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-09 01:43:36.638909 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-09 01:43:36.638918 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-09 01:43:36.638927 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-09 01:43:36.638936 | orchestrator | 2026-01-09 01:43:36.638944 | orchestrator | ================================================================================ 2026-01-09 01:43:36.638953 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
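The AttributeError above comes from neutron_tempest_plugin calling testtools.try_import, a best-effort import helper that recent testtools releases no longer export, so every discovery pass that imports test_dns_integration fails the same way. As a rough sketch of the behavior the plugin expects (the reimplementation below is an assumption for illustration, not the original testtools code):

```python
import importlib


def try_import(name, alternative=None):
    """Return the module named ``name`` if it can be imported, else ``alternative``.

    This mirrors what the plugin expects from testtools.try_import: a missing
    optional dependency (here designate_tempest_plugin) should degrade to a
    fallback value instead of raising at module import time.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        # Module (or one of its dependencies) is not installed.
        return alternative


# A present module imports normally; a missing one yields the fallback.
dns_base = try_import("designate_tempest_plugin.tests.base")  # None when absent
```

With the helper gone from the installed testtools, the plugin module raises AttributeError before the try/except-style fallback can ever apply, which is why discovery aborts rather than merely skipping the DNS scenario tests.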
2026-01-09 01:43:37.094543 | orchestrator |
2026-01-09 01:43:37.094668 | orchestrator | + echo
2026-01-09 01:43:37.094690 | orchestrator | ## IMAGE (API)
2026-01-09 01:43:37.094773 | orchestrator |
2026-01-09 01:43:37.094790 | orchestrator | + echo '## IMAGE (API)'
2026-01-09 01:43:37.094813 | orchestrator | + echo
2026-01-09 01:43:37.094832 | orchestrator | + _tempest tempest.api.image.v2
2026-01-09 01:43:37.094849 | orchestrator | + local regex=tempest.api.image.v2
2026-01-09 01:43:37.095900 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-01-09 01:43:37.096515 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:43:37.101464 | orchestrator | + tee -a /opt/tempest/20260109-0143.log
2026-01-09 01:43:41.091150 | orchestrator | 2026-01-09 01:43:41.090 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:43:41.189358 | orchestrator | 2026-01-09 01:43:41.188 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:41.190626 | orchestrator | 2026-01-09 01:43:41.191 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:53.943770 | orchestrator |
2026-01-09 01:43:53.943852 | orchestrator | =========================
2026-01-09 01:43:53.943860 | orchestrator | Failures during discovery
2026-01-09 01:43:53.943864 | orchestrator | =========================
2026-01-09 01:43:53.943869 | orchestrator | --- stdout ---
2026-01-09 01:43:53.943875 | orchestrator | 2026-01-09 01:43:44.766 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:43:53.943884 | orchestrator | 2026-01-09 01:43:44.768 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:53.943931 | orchestrator | 2026-01-09 01:43:44.770 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:53.944006 | orchestrator | 2026-01-09 01:43:44.773 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-09 01:43:53.944012 | orchestrator | 2026-01-09 01:43:45.614 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:53.944064 | orchestrator | --- import errors ---
2026-01-09 01:43:53.944068 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-09 01:43:53.944072 | orchestrator | Traceback (most recent call last):
2026-01-09 01:43:53.944076 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-09 01:43:53.944080 | orchestrator |     module = self._get_module_from_name(name)
2026-01-09 01:43:53.944084 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-09 01:43:53.944088 | orchestrator |     __import__(name)
2026-01-09 01:43:53.944096 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-09 01:43:53.944100 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-09 01:43:53.944111 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-09 01:43:53.944115 | orchestrator |
2026-01-09 01:43:53.944119 | orchestrator | ================================================================================
2026-01-09 01:43:53.944123 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-09 01:43:54.262484 | orchestrator |
2026-01-09 01:43:54.262564 | orchestrator | ## NETWORK (API)
2026-01-09 01:43:54.262575 | orchestrator |
2026-01-09 01:43:54.262584 | orchestrator | + echo
2026-01-09 01:43:54.262592 | orchestrator | + echo '## NETWORK (API)'
2026-01-09 01:43:54.262602 | orchestrator | + echo
2026-01-09 01:43:54.262610 | orchestrator | + _tempest tempest.api.network
2026-01-09 01:43:54.262619 | orchestrator | + local regex=tempest.api.network
2026-01-09 01:43:54.263604 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-01-09 01:43:54.263942 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:43:54.268800 | orchestrator | + tee -a /opt/tempest/20260109-0143.log
2026-01-09 01:43:58.054567 | orchestrator | 2026-01-09 01:43:58.053 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:43:58.184234 | orchestrator | 2026-01-09 01:43:58.183 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:43:58.189928 | orchestrator | 2026-01-09 01:43:58.188 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:44:10.370955 | orchestrator |
2026-01-09 01:44:10.371090 | orchestrator | =========================
2026-01-09 01:44:10.371108 | orchestrator | Failures during discovery
2026-01-09 01:44:10.371120 | orchestrator | =========================
2026-01-09 01:44:10.371132 | orchestrator | --- stdout ---
2026-01-09 01:44:10.371158 | orchestrator | 2026-01-09 01:44:01.770 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:44:10.371171 | orchestrator | 2026-01-09 01:44:01.771 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:44:10.371399 | orchestrator | 2026-01-09 01:44:01.774 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:44:10.371600 | orchestrator | 2026-01-09 01:44:01.777 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-09 01:44:10.371613 | orchestrator | 2026-01-09 01:44:02.607 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugins: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests
2026-01-09 01:44:10.371791 | orchestrator | --- import errors ---
2026-01-09 01:44:10.371802 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-09 01:44:10.371813 | orchestrator | Traceback (most recent call last):
2026-01-09 01:44:10.371825 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-09 01:44:10.371836 | orchestrator |     module = self._get_module_from_name(name)
2026-01-09 01:44:10.371847 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-09 01:44:10.371858 | orchestrator |     __import__(name)
2026-01-09 01:44:10.371880 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-09 01:44:10.371891 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-09 01:44:10.371913 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-09 01:44:10.371923 | orchestrator |
2026-01-09 01:44:10.371934 | orchestrator | ================================================================================
2026-01-09 01:44:10.372016 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-09 01:44:10.718399 | orchestrator |
2026-01-09 01:44:10.718499 | orchestrator | ## VOLUME (API)
2026-01-09 01:44:10.718513 | orchestrator |
2026-01-09 01:44:10.718524 | orchestrator | + echo
2026-01-09 01:44:10.718534 | orchestrator | + echo '## VOLUME (API)'
2026-01-09 01:44:10.718545 | orchestrator | + echo
2026-01-09 01:44:10.718555 | orchestrator | + _tempest tempest.api.volume
2026-01-09 01:44:10.718565 | orchestrator | + local regex=tempest.api.volume
2026-01-09 01:44:10.718588 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-01-09 01:44:10.719761 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:44:10.726586 | orchestrator | + tee -a /opt/tempest/20260109-0144.log
2026-01-09 01:44:14.544918 | orchestrator | 2026-01-09 01:44:14.544 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:44:14.642332 | orchestrator | 2026-01-09 01:44:14.641 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:44:14.642494 | orchestrator | 2026-01-09 01:44:14.641 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:44:14.642512 | orchestrator | 2026-01-09 01:44:14.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:44:14.642524 | orchestrator | 2026-01-09 01:44:14.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:14.642536 | orchestrator | 2026-01-09 01:44:14.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:44:14.642547 | orchestrator | 2026-01-09 01:44:14.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:44:14.642559 | orchestrator | 2026-01-09 01:44:14.642 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:44:14.642583 | orchestrator | 2026-01-09 01:44:14.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:44:14.642595 | orchestrator | 2026-01-09 01:44:14.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:44:14.643021 | orchestrator | 2026-01-09 01:44:14.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:44:14.643143 | orchestrator | 2026-01-09 01:44:14.643 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:44:14.643447 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:44:14.643464 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:44:14.643480 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:44:14.643848 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:14.643938 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:44:14.643955 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:44:14.643968 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:44:14.644027 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:44:14.644040 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:44:14.644052 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:44:14.644081 | orchestrator | 2026-01-09 01:44:14.644 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:44:28.263503 | orchestrator |
2026-01-09 01:44:28.266531 | orchestrator | =========================
2026-01-09 01:44:28.266579 | orchestrator | Failures during discovery
2026-01-09 01:44:28.266590 | orchestrator | =========================
2026-01-09 01:44:28.266600 | orchestrator | --- stdout ---
2026-01-09 01:44:28.266612 | orchestrator | 2026-01-09 01:44:18.276 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:44:28.266623 | orchestrator | 2026-01-09 01:44:18.277 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:44:28.266636 | orchestrator | 2026-01-09 01:44:18.277 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:44:28.266646 | orchestrator | 2026-01-09 01:44:18.278 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:44:28.266656 | orchestrator | 2026-01-09 01:44:18.278 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:28.266666 | orchestrator | 2026-01-09 01:44:18.278 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:44:28.266675 | orchestrator | 2026-01-09 01:44:18.278 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:44:28.266685 | orchestrator | 2026-01-09 01:44:18.278 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:44:28.266694 | orchestrator | 2026-01-09 01:44:18.279 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:44:28.266733 | orchestrator | 2026-01-09 01:44:18.279 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:44:28.266750 | orchestrator | 2026-01-09 01:44:18.279 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:44:28.266765 | orchestrator | 2026-01-09 01:44:18.279 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:44:28.266781 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:44:28.266798 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:44:28.266816 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:44:28.266833 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:28.266851 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:44:28.266866 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:44:28.266876 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:44:28.266918 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:44:28.266929 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:44:28.266939 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:44:28.266949 | orchestrator | 2026-01-09 01:44:18.280 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:44:28.266961 | orchestrator | 2026-01-09 01:44:18.283 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-09 01:44:28.266974 | orchestrator | 2026-01-09 01:44:19.106 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-09 01:44:28.266984 | orchestrator | 2026-01-09 01:44:19.106 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-09 01:44:28.266994 | orchestrator | 2026-01-09 01:44:19.106 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-09 01:44:28.267003 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:28.267042 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-09 01:44:28.267053 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-09 01:44:28.267062 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-09 01:44:28.267072 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-09 01:44:28.267081 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-09 01:44:28.267091 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-09 01:44:28.267100 | orchestrator | 2026-01-09 01:44:19.107 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-09 01:44:28.267110 | orchestrator | --- import errors ---
2026-01-09 01:44:28.267120 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-09 01:44:28.267130 | orchestrator | Traceback (most recent call last):
2026-01-09 01:44:28.267168 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-09 01:44:28.267178 | orchestrator |     module = self._get_module_from_name(name)
2026-01-09 01:44:28.267206 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-09 01:44:28.267216 | orchestrator |     __import__(name)
2026-01-09 01:44:28.267226 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-09 01:44:28.267240 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-09 01:44:28.267250 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-09 01:44:28.267260 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-09 01:44:28.267270 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-09 01:44:28.267280 | orchestrator |
2026-01-09 01:44:28.267295 | orchestrator | ================================================================================
2026-01-09 01:44:28.267305 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-09 01:44:28.725434 | orchestrator |
2026-01-09 01:44:28.725521 | orchestrator | ## COMPUTE (API)
2026-01-09 01:44:28.725530 | orchestrator |
2026-01-09 01:44:28.725537 | orchestrator | + echo
2026-01-09 01:44:28.725544 | orchestrator | + echo '## COMPUTE (API)'
2026-01-09 01:44:28.725576 | orchestrator | + echo
2026-01-09 01:44:28.725584 | orchestrator | + _tempest tempest.api.compute
2026-01-09 01:44:28.725590 | orchestrator | + local regex=tempest.api.compute
2026-01-09 01:44:28.727679 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-01-09 01:44:28.731165 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:44:28.734280 | orchestrator | + tee -a /opt/tempest/20260109-0144.log
2026-01-09 01:44:32.638014 | orchestrator | 2026-01-09 01:44:32.635 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:44:32.733910 | orchestrator | 2026-01-09 01:44:32.732 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:44:32.734061 | orchestrator | 2026-01-09 01:44:32.732 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:44:32.734077 | orchestrator | 2026-01-09 01:44:32.733 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:44:32.734085 | orchestrator | 2026-01-09 01:44:32.733 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:32.734092 | orchestrator | 2026-01-09 01:44:32.733 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:44:32.734099 | orchestrator | 2026-01-09 01:44:32.734 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:44:32.734105 | orchestrator | 2026-01-09 01:44:32.734 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:44:32.734123 | orchestrator | 2026-01-09 01:44:32.734 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:44:32.734170 | orchestrator | 2026-01-09 01:44:32.734 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:44:32.735356 | orchestrator | 2026-01-09 01:44:32.735 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:44:32.735481 | orchestrator | 2026-01-09 01:44:32.735 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:44:32.735991 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:44:32.736026 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:44:32.736046 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:44:32.736075 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:32.736368 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:44:32.736396 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:44:32.736417 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:44:32.736435 | orchestrator | 2026-01-09 01:44:32.736 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:44:32.736453 | orchestrator | 2026-01-09 01:44:32.737 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:44:32.736634 | orchestrator | 2026-01-09 01:44:32.737 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:44:32.736663 | orchestrator | 2026-01-09 01:44:32.737 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:44:43.893997 | orchestrator |
2026-01-09 01:44:43.895530 | orchestrator | =========================
2026-01-09 01:44:43.895577 | orchestrator | Failures during discovery
2026-01-09 01:44:43.895584 | orchestrator | =========================
2026-01-09 01:44:43.895590 | orchestrator | --- stdout ---
2026-01-09 01:44:43.895598 | orchestrator | 2026-01-09 01:44:36.227 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:44:43.895606 | orchestrator | 2026-01-09 01:44:36.229 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:44:43.895615 | orchestrator | 2026-01-09 01:44:36.229 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:44:43.895621 | orchestrator | 2026-01-09 01:44:36.229 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:44:43.895627 | orchestrator | 2026-01-09 01:44:36.229 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:43.895634 | orchestrator | 2026-01-09 01:44:36.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:44:43.895639 | orchestrator | 2026-01-09 01:44:36.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:44:43.895645 | orchestrator | 2026-01-09 01:44:36.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:44:43.895652 | orchestrator | 2026-01-09 01:44:36.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:44:43.895658 | orchestrator | 2026-01-09 01:44:36.230 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:44:43.895685 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:44:43.895692 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:44:43.895697 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:44:43.895704 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:44:43.895709 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:44:43.895715 | orchestrator | 2026-01-09 01:44:36.231 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:43.895723 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:44:43.895732 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:44:43.895741 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:44:43.895769 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:44:43.895779 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:44:43.895787 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:44:43.895820 | orchestrator | 2026-01-09 01:44:36.232 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:44:43.895853 | orchestrator | 2026-01-09 01:44:36.234 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-09 01:44:43.895885 | orchestrator | 2026-01-09 01:44:37.048 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-09 01:44:43.895896 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-09 01:44:43.895906 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-09 01:44:43.895915 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:43.895959 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-09 01:44:43.895972 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-09 01:44:43.895982 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-09 01:44:43.895992 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-09 01:44:43.896001 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-09 01:44:43.896011 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-09 01:44:43.896021 | orchestrator | 2026-01-09 01:44:37.049 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-09 01:44:43.896031 | orchestrator | --- import errors ---
2026-01-09 01:44:43.896041 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-09 01:44:43.896050 | orchestrator | Traceback (most recent call last):
2026-01-09 01:44:43.896060 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-09 01:44:43.896069 | orchestrator |     module = self._get_module_from_name(name)
2026-01-09 01:44:43.896080 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-09 01:44:43.896089 | orchestrator |     __import__(name)
2026-01-09 01:44:43.896099 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-09 01:44:43.896109 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-09 01:44:43.896119 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-09 01:44:43.896128 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-09 01:44:43.896138 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-09 01:44:43.896148 | orchestrator |
2026-01-09 01:44:43.896154 | orchestrator | ================================================================================
2026-01-09 01:44:43.896160 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-09 01:44:44.183213 | orchestrator |
2026-01-09 01:44:44.183287 | orchestrator | ## DNS (API)
2026-01-09 01:44:44.183293 | orchestrator |
2026-01-09 01:44:44.183298 | orchestrator | + echo
2026-01-09 01:44:44.183302 | orchestrator | + echo '## DNS (API)'
2026-01-09 01:44:44.183307 | orchestrator | + echo
2026-01-09 01:44:44.183312 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-01-09 01:44:44.183318 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-01-09 01:44:44.183629 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-01-09 01:44:44.185187 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-09 01:44:44.189012 | orchestrator | + tee -a /opt/tempest/20260109-0144.log
2026-01-09 01:44:47.716588 | orchestrator | 2026-01-09 01:44:47.714 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-09 01:44:47.817038 | orchestrator | 2026-01-09 01:44:47.816 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:44:47.817115 | orchestrator | 2026-01-09 01:44:47.816 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:44:47.817123 | orchestrator | 2026-01-09 01:44:47.816 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:44:47.817131 | orchestrator | 2026-01-09 01:44:47.816 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:47.817300 | orchestrator | 2026-01-09 01:44:47.817 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:44:47.817312 | orchestrator | 2026-01-09 01:44:47.817 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:44:47.817316 | orchestrator | 2026-01-09 01:44:47.817 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:44:47.817414 | orchestrator | 2026-01-09 01:44:47.817 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:44:47.817421 | orchestrator | 2026-01-09 01:44:47.817 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:44:47.818702 | orchestrator | 2026-01-09 01:44:47.818 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:44:47.818741 | orchestrator | 2026-01-09 01:44:47.818 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:44:47.819057 | orchestrator | 2026-01-09 01:44:47.818 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:44:47.819083 | orchestrator | 2026-01-09 01:44:47.818 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:44:47.819088 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:44:47.819092 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:44:47.819100 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:44:47.819321 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:44:47.819340 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:44:47.819347 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:44:47.819354 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:44:47.819361 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:44:47.819367 | orchestrator | 2026-01-09 01:44:47.819 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:45:01.688373 | orchestrator |
2026-01-09 01:45:01.688447 | orchestrator | =========================
2026-01-09 01:45:01.688457 | orchestrator | Failures during discovery
2026-01-09 01:45:01.688463 | orchestrator | =========================
2026-01-09 01:45:01.688469 | orchestrator | --- stdout ---
2026-01-09 01:45:01.688476 | orchestrator | 2026-01-09 01:44:51.356 9 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-09 01:45:01.688483 | orchestrator | 2026-01-09 01:44:51.358 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-09 01:45:01.688491 | orchestrator | 2026-01-09 01:44:51.358 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-09 01:45:01.688497 | orchestrator | 2026-01-09 01:44:51.358 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-09 01:45:01.688503 | orchestrator | 2026-01-09 01:44:51.358 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-09 01:45:01.688508 | orchestrator | 2026-01-09 01:44:51.359 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-09 01:45:01.688514 | orchestrator | 2026-01-09 01:44:51.359 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-09 01:45:01.688520 | orchestrator | 2026-01-09 01:44:51.359 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-09 01:45:01.688525 | orchestrator | 2026-01-09 01:44:51.359 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-09 01:45:01.688530 | orchestrator | 2026-01-09 01:44:51.359 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-09 01:45:01.688536 | orchestrator | 2026-01-09 01:44:51.360 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-09 01:45:01.688541 | orchestrator | 2026-01-09 01:44:51.360 9 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-09 01:45:01.688547 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-09 01:45:01.688552 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-09 01:45:01.688558 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-09 01:45:01.688563 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-09 01:45:01.688570 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-09 01:45:01.688575 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-09 01:45:01.688581 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-09 01:45:01.688586 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-09 01:45:01.688592 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-09 01:45:01.688597 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-09 01:45:01.688603 | orchestrator | 2026-01-09 01:44:51.361 9 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-09 01:45:01.688652 | orchestrator | 2026-01-09 01:44:51.364 9 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-09 01:45:01.688687 | orchestrator | 2026-01-09 01:44:52.176 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-09 01:45:01.688698 | orchestrator | 2026-01-09 01:44:52.176 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-09 01:45:01.688712 | orchestrator | 2026-01-09 01:44:52.176 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-09 01:45:01.688723 | orchestrator | 2026-01-09 01:44:52.176 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:01.688732 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-09 01:45:01.688755 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-09 01:45:01.688764 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-09 01:45:01.688772 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-09 01:45:01.688780 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-09 01:45:01.688812 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-09 01:45:01.688822 | orchestrator | 2026-01-09 01:44:52.177 9 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-09 01:45:01.688831 | orchestrator | --- import errors --- 2026-01-09 01:45:01.688841 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-09 01:45:01.688849 | orchestrator | Traceback (most 
recent call last): 2026-01-09 01:45:01.688860 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-09 01:45:01.688868 | orchestrator | module = self._get_module_from_name(name) 2026-01-09 01:45:01.688878 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-09 01:45:01.688888 | orchestrator | __import__(name) 2026-01-09 01:45:01.688897 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-09 01:45:01.688907 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module> 2026-01-09 01:45:01.688916 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-09 01:45:01.688926 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-09 01:45:01.688934 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-09 01:45:01.688945 | orchestrator | 2026-01-09 01:45:01.688951 | orchestrator | ================================================================================ 2026-01-09 01:45:01.688958 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
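The discovery failure above comes from `neutron_tempest_plugin` calling `testtools.try_import`, an optional-import helper that is absent from the `testtools` version in the tempest image. As a hedged illustration only (this is a stdlib reimplementation sketch, not testtools' or tempest's actual code), the helper's behavior is roughly: resolve a dotted name and return a default instead of raising when the target is not installed.

```python
# Sketch of a try_import-style helper: import a dotted name, returning
# `alternative` (default None) instead of raising ImportError.
# Hypothetical reimplementation for illustration, not the testtools code.
import importlib


def try_import(name, alternative=None):
    """Return the module (or attribute) named by `name`, or `alternative`."""
    # First, try the full dotted path as a module.
    try:
        return importlib.import_module(name)
    except ImportError:
        pass
    # Otherwise, import the parent and look up the last segment on it.
    module_name, _, attr = name.rpartition(".")
    if module_name:
        try:
            module = importlib.import_module(module_name)
            return getattr(module, attr, alternative)
        except ImportError:
            return alternative
    return alternative


# An installed module resolves normally; a missing plugin (such as the
# designate tempest plugin in this traceback) degrades to None.
json_mod = try_import("json")
missing = try_import("designate_tempest_plugin.tests.base")
```

With a helper like this, `dns_base` would simply be `None` when the designate plugin is absent; the `AttributeError` in the log instead means the helper itself vanished from `testtools`, so every import of `test_dns_integration` fails during discovery.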
2026-01-09 01:45:02.185311 | orchestrator | 2026-01-09 01:45:02.185382 | orchestrator | ## OBJECT-STORE (API) 2026-01-09 01:45:02.185389 | orchestrator | 2026-01-09 01:45:02.185394 | orchestrator | + echo 2026-01-09 01:45:02.185399 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-01-09 01:45:02.185404 | orchestrator | + echo 2026-01-09 01:45:02.185409 | orchestrator | + _tempest tempest.api.object_storage 2026-01-09 01:45:02.185415 | orchestrator | + local regex=tempest.api.object_storage 2026-01-09 01:45:02.186324 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-01-09 01:45:02.186370 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-09 01:45:02.188534 | orchestrator | + tee -a /opt/tempest/20260109-0145.log 2026-01-09 01:45:06.065366 | orchestrator | 2026-01-09 01:45:06.064 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-09 01:45:06.187285 | orchestrator | 2026-01-09 01:45:06.186 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-09 01:45:06.187371 | orchestrator | 2026-01-09 01:45:06.186 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-09 01:45:06.187380 | orchestrator | 2026-01-09 01:45:06.186 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-09 01:45:06.187386 | orchestrator | 2026-01-09 01:45:06.187 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:06.187392 | orchestrator | 2026-01-09 01:45:06.187 1 INFO 
tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-09 01:45:06.187398 | orchestrator | 2026-01-09 01:45:06.187 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-09 01:45:06.187405 | orchestrator | 2026-01-09 01:45:06.188 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-09 01:45:06.187695 | orchestrator | 2026-01-09 01:45:06.188 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-09 01:45:06.187717 | orchestrator | 2026-01-09 01:45:06.188 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-09 01:45:06.188208 | orchestrator | 2026-01-09 01:45:06.188 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-09 01:45:06.188274 | orchestrator | 2026-01-09 01:45:06.189 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-09 01:45:06.189016 | orchestrator | 2026-01-09 01:45:06.189 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-09 01:45:06.189057 | orchestrator | 2026-01-09 01:45:06.189 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-09 01:45:06.189103 | orchestrator | 2026-01-09 01:45:06.189 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-09 01:45:06.189334 | orchestrator | 2026-01-09 01:45:06.189 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:06.189349 | orchestrator | 2026-01-09 01:45:06.190 1 INFO 
tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-09 01:45:06.189355 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-09 01:45:06.189362 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-09 01:45:06.189458 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-09 01:45:06.189733 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-09 01:45:06.189750 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-09 01:45:06.189756 | orchestrator | 2026-01-09 01:45:06.190 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-09 01:45:18.905977 | orchestrator | 2026-01-09 01:45:18.906094 | orchestrator | ========================= 2026-01-09 01:45:18.906105 | orchestrator | Failures during discovery 2026-01-09 01:45:18.906112 | orchestrator | ========================= 2026-01-09 01:45:18.906118 | orchestrator | --- stdout --- 2026-01-09 01:45:18.906125 | orchestrator | 2026-01-09 01:45:09.727 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf 2026-01-09 01:45:18.907643 | orchestrator | 2026-01-09 01:45:09.729 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-09 01:45:18.907680 | orchestrator | 2026-01-09 01:45:09.729 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest 
plugin: barbican_tests 2026-01-09 01:45:18.907686 | orchestrator | 2026-01-09 01:45:09.729 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-09 01:45:18.907692 | orchestrator | 2026-01-09 01:45:09.729 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:18.907698 | orchestrator | 2026-01-09 01:45:09.730 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-09 01:45:18.907704 | orchestrator | 2026-01-09 01:45:09.730 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-09 01:45:18.907709 | orchestrator | 2026-01-09 01:45:09.730 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-09 01:45:18.907715 | orchestrator | 2026-01-09 01:45:09.730 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-09 01:45:18.907720 | orchestrator | 2026-01-09 01:45:09.730 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-09 01:45:18.907726 | orchestrator | 2026-01-09 01:45:09.731 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-09 01:45:18.907731 | orchestrator | 2026-01-09 01:45:09.731 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-09 01:45:18.907737 | orchestrator | 2026-01-09 01:45:09.731 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-09 01:45:18.907742 | orchestrator | 2026-01-09 01:45:09.731 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest 
plugin: barbican_tests 2026-01-09 01:45:18.907748 | orchestrator | 2026-01-09 01:45:09.731 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-09 01:45:18.907754 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:18.907761 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-09 01:45:18.907767 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-09 01:45:18.907772 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-09 01:45:18.907777 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-09 01:45:18.907783 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-09 01:45:18.907788 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-09 01:45:18.907794 | orchestrator | 2026-01-09 01:45:09.732 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-09 01:45:18.907802 | orchestrator | 2026-01-09 01:45:09.734 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). 
Its value may be silently ignored in the future. 2026-01-09 01:45:18.907809 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-09 01:45:18.907825 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-09 01:45:18.907854 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-09 01:45:18.907862 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-09 01:45:18.907904 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-09 01:45:18.907910 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-09 01:45:18.907916 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-09 01:45:18.907921 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-09 01:45:18.907926 | orchestrator | 2026-01-09 01:45:10.594 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-09 01:45:18.907932 | orchestrator | 2026-01-09 01:45:10.595 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-09 01:45:18.907937 | orchestrator | 2026-01-09 01:45:10.595 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-09 01:45:18.907943 | orchestrator | --- import errors --- 2026-01-09 01:45:18.907950 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 
2026-01-09 01:45:18.907955 | orchestrator | Traceback (most recent call last): 2026-01-09 01:45:18.907975 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-09 01:45:18.907981 | orchestrator | module = self._get_module_from_name(name) 2026-01-09 01:45:18.907987 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-09 01:45:18.907992 | orchestrator | __import__(name) 2026-01-09 01:45:18.907998 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-09 01:45:18.908003 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module> 2026-01-09 01:45:18.908009 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-09 01:45:18.908014 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-09 01:45:18.908020 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-09 01:45:18.908026 | orchestrator | 2026-01-09 01:45:18.908031 | orchestrator | ================================================================================ 2026-01-09 01:45:18.908037 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-09 01:45:19.596987 | orchestrator | ok: Runtime: 0:03:37.999440 2026-01-09 01:45:19.639264 | 2026-01-09 01:45:19.639563 | TASK [Check prometheus alert status] 2026-01-09 01:45:20.184684 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:20.189174 | 2026-01-09 01:45:20.189383 | PLAY RECAP 2026-01-09 01:45:20.189517 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-01-09 01:45:20.189568 | 2026-01-09 01:45:20.467048 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-09 01:45:20.469145 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-09 01:45:21.285425 | 2026-01-09 01:45:21.285604 | PLAY [Post output play] 2026-01-09 01:45:21.302737 | 2026-01-09 01:45:21.302924 | LOOP [stage-output : Register sources] 2026-01-09 01:45:21.374894 | 2026-01-09 01:45:21.375257 | TASK [stage-output : Check sudo] 2026-01-09 01:45:22.359355 | orchestrator | sudo: a password is required 2026-01-09 01:45:22.418342 | orchestrator | ok: Runtime: 0:00:00.016420 2026-01-09 01:45:22.434240 | 2026-01-09 01:45:22.434484 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-09 01:45:22.479378 | 2026-01-09 01:45:22.479729 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-09 01:45:22.559967 | orchestrator | ok 2026-01-09 01:45:22.569460 | 2026-01-09 01:45:22.569635 | LOOP [stage-output : Ensure target folders exist] 2026-01-09 01:45:23.063977 | orchestrator | ok: "docs" 2026-01-09 01:45:23.064254 | 2026-01-09 01:45:23.332663 | orchestrator | ok: "artifacts" 2026-01-09 01:45:23.630916 | orchestrator | ok: "logs" 2026-01-09 01:45:23.657424 | 2026-01-09 01:45:23.657684 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-09 01:45:23.709962 | 2026-01-09 01:45:23.710315 | TASK [stage-output : Make all log files readable] 2026-01-09 01:45:24.026989 | orchestrator | ok 
2026-01-09 01:45:24.037261 | 2026-01-09 01:45:24.037499 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-01-09 01:45:24.073636 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:24.091792 | 2026-01-09 01:45:24.091967 | TASK [stage-output : Discover log files for compression] 2026-01-09 01:45:24.127121 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:24.144338 | 2026-01-09 01:45:24.144556 | LOOP [stage-output : Archive everything from logs] 2026-01-09 01:45:24.197517 | 2026-01-09 01:45:24.197777 | PLAY [Post cleanup play] 2026-01-09 01:45:24.210200 | 2026-01-09 01:45:24.210380 | TASK [Set cloud fact (Zuul deployment)] 2026-01-09 01:45:24.276727 | orchestrator | ok 2026-01-09 01:45:24.289547 | 2026-01-09 01:45:24.289701 | TASK [Set cloud fact (local deployment)] 2026-01-09 01:45:24.334927 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:24.369079 | 2026-01-09 01:45:24.369366 | TASK [Clean the cloud environment] 2026-01-09 01:45:25.611908 | orchestrator | 2026-01-09 01:45:25 - clean up servers 2026-01-09 01:45:26.488337 | orchestrator | 2026-01-09 01:45:26 - testbed-manager 2026-01-09 01:45:26.578545 | orchestrator | 2026-01-09 01:45:26 - testbed-node-5 2026-01-09 01:45:26.663684 | orchestrator | 2026-01-09 01:45:26 - testbed-node-4 2026-01-09 01:45:26.748412 | orchestrator | 2026-01-09 01:45:26 - testbed-node-2 2026-01-09 01:45:26.839962 | orchestrator | 2026-01-09 01:45:26 - testbed-node-3 2026-01-09 01:45:26.929179 | orchestrator | 2026-01-09 01:45:26 - testbed-node-0 2026-01-09 01:45:27.018907 | orchestrator | 2026-01-09 01:45:27 - testbed-node-1 2026-01-09 01:45:27.107521 | orchestrator | 2026-01-09 01:45:27 - clean up keypairs 2026-01-09 01:45:27.126535 | orchestrator | 2026-01-09 01:45:27 - testbed 2026-01-09 01:45:27.153272 | orchestrator | 2026-01-09 01:45:27 - wait for servers to be gone 2026-01-09 01:45:38.205488 | orchestrator | 2026-01-09 01:45:38 - clean up 
ports 2026-01-09 01:45:38.387908 | orchestrator | 2026-01-09 01:45:38 - 1241fa17-64dd-4859-89f2-11160ebbd85b 2026-01-09 01:45:38.652721 | orchestrator | 2026-01-09 01:45:38 - 60e17e8d-73a5-4118-8b45-3c131401ba30 2026-01-09 01:45:38.925951 | orchestrator | 2026-01-09 01:45:38 - 6bd89052-a490-483b-b920-d44243de545f 2026-01-09 01:45:39.193438 | orchestrator | 2026-01-09 01:45:39 - 6df6a43e-aefb-40fd-b47f-702547e5f797 2026-01-09 01:45:39.438280 | orchestrator | 2026-01-09 01:45:39 - bf694a19-b98e-45f1-80b6-5cc43b908624 2026-01-09 01:45:40.351422 | orchestrator | 2026-01-09 01:45:40 - f35b8bfd-48e0-4aa2-8158-289afbb4e3d1 2026-01-09 01:45:40.560175 | orchestrator | 2026-01-09 01:45:40 - feac9d6f-2a35-40b1-a1ba-588924d0c3cf 2026-01-09 01:45:40.771072 | orchestrator | 2026-01-09 01:45:40 - clean up volumes 2026-01-09 01:45:40.889716 | orchestrator | 2026-01-09 01:45:40 - testbed-volume-0-node-base 2026-01-09 01:45:40.932979 | orchestrator | 2026-01-09 01:45:40 - testbed-volume-manager-base 2026-01-09 01:45:40.974285 | orchestrator | 2026-01-09 01:45:40 - testbed-volume-3-node-base 2026-01-09 01:45:41.020675 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-2-node-base 2026-01-09 01:45:41.063956 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-5-node-base 2026-01-09 01:45:41.109263 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-1-node-base 2026-01-09 01:45:41.154504 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-4-node-base 2026-01-09 01:45:41.196454 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-5-node-5 2026-01-09 01:45:41.241280 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-2-node-5 2026-01-09 01:45:41.285764 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-1-node-4 2026-01-09 01:45:41.329961 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-4-node-4 2026-01-09 01:45:41.370120 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-0-node-3 2026-01-09 01:45:41.412568 | orchestrator | 2026-01-09 01:45:41 - 
testbed-volume-7-node-4 2026-01-09 01:45:41.457203 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-6-node-3 2026-01-09 01:45:41.501889 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-8-node-5 2026-01-09 01:45:41.541703 | orchestrator | 2026-01-09 01:45:41 - testbed-volume-3-node-3 2026-01-09 01:45:41.583641 | orchestrator | 2026-01-09 01:45:41 - disconnect routers 2026-01-09 01:45:41.649128 | orchestrator | 2026-01-09 01:45:41 - testbed 2026-01-09 01:45:42.502741 | orchestrator | 2026-01-09 01:45:42 - clean up subnets 2026-01-09 01:45:42.543039 | orchestrator | 2026-01-09 01:45:42 - subnet-testbed-management 2026-01-09 01:45:42.709818 | orchestrator | 2026-01-09 01:45:42 - clean up networks 2026-01-09 01:45:42.847220 | orchestrator | 2026-01-09 01:45:42 - net-testbed-management 2026-01-09 01:45:43.126555 | orchestrator | 2026-01-09 01:45:43 - clean up security groups 2026-01-09 01:45:43.170522 | orchestrator | 2026-01-09 01:45:43 - testbed-node 2026-01-09 01:45:43.284105 | orchestrator | 2026-01-09 01:45:43 - testbed-management 2026-01-09 01:45:43.404608 | orchestrator | 2026-01-09 01:45:43 - clean up floating ips 2026-01-09 01:45:43.441719 | orchestrator | 2026-01-09 01:45:43 - 81.163.192.67 2026-01-09 01:45:43.813411 | orchestrator | 2026-01-09 01:45:43 - clean up routers 2026-01-09 01:45:43.916159 | orchestrator | 2026-01-09 01:45:43 - testbed 2026-01-09 01:45:44.945504 | orchestrator | ok: Runtime: 0:00:19.992970 2026-01-09 01:45:44.949878 | 2026-01-09 01:45:44.950049 | PLAY RECAP 2026-01-09 01:45:44.950155 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-01-09 01:45:44.950207 | 2026-01-09 01:45:45.107538 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-09 01:45:45.111552 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-09 01:45:45.953914 | 2026-01-09 01:45:45.954094 | PLAY [Cleanup play] 2026-01-09 
01:45:45.972127 | 2026-01-09 01:45:45.972313 | TASK [Set cloud fact (Zuul deployment)] 2026-01-09 01:45:46.015045 | orchestrator | ok 2026-01-09 01:45:46.022670 | 2026-01-09 01:45:46.022862 | TASK [Set cloud fact (local deployment)] 2026-01-09 01:45:46.047940 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:46.067999 | 2026-01-09 01:45:46.068182 | TASK [Clean the cloud environment] 2026-01-09 01:45:47.419305 | orchestrator | 2026-01-09 01:45:47 - clean up servers 2026-01-09 01:45:48.008346 | orchestrator | 2026-01-09 01:45:48 - clean up keypairs 2026-01-09 01:45:48.028860 | orchestrator | 2026-01-09 01:45:48 - wait for servers to be gone 2026-01-09 01:45:48.080176 | orchestrator | 2026-01-09 01:45:48 - clean up ports 2026-01-09 01:45:48.157221 | orchestrator | 2026-01-09 01:45:48 - clean up volumes 2026-01-09 01:45:48.225723 | orchestrator | 2026-01-09 01:45:48 - disconnect routers 2026-01-09 01:45:48.252537 | orchestrator | 2026-01-09 01:45:48 - clean up subnets 2026-01-09 01:45:48.273399 | orchestrator | 2026-01-09 01:45:48 - clean up networks 2026-01-09 01:45:48.402728 | orchestrator | 2026-01-09 01:45:48 - clean up security groups 2026-01-09 01:45:48.437740 | orchestrator | 2026-01-09 01:45:48 - clean up floating ips 2026-01-09 01:45:48.464492 | orchestrator | 2026-01-09 01:45:48 - clean up routers 2026-01-09 01:45:48.652205 | orchestrator | ok: Runtime: 0:00:01.519653 2026-01-09 01:45:48.654540 | 2026-01-09 01:45:48.654659 | PLAY RECAP 2026-01-09 01:45:48.654733 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-01-09 01:45:48.654770 | 2026-01-09 01:45:48.809239 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-09 01:45:48.812412 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-09 01:45:49.618108 | 2026-01-09 01:45:49.618286 | PLAY [Base post-fetch] 2026-01-09 01:45:49.635361 | 
2026-01-09 01:45:49.635533 | TASK [fetch-output : Set log path for multiple nodes] 2026-01-09 01:45:49.701588 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:49.715833 | 2026-01-09 01:45:49.716081 | TASK [fetch-output : Set log path for single node] 2026-01-09 01:45:49.767203 | orchestrator | ok 2026-01-09 01:45:49.776062 | 2026-01-09 01:45:49.776229 | LOOP [fetch-output : Ensure local output dirs] 2026-01-09 01:45:50.310794 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/logs" 2026-01-09 01:45:50.600780 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/artifacts" 2026-01-09 01:45:50.895540 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/80bff113f6db4f77b7b58d76c24d2a8f/work/docs" 2026-01-09 01:45:50.921205 | 2026-01-09 01:45:50.921438 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-01-09 01:45:51.927146 | orchestrator | changed: .d..t...... ./ 2026-01-09 01:45:51.927542 | orchestrator | changed: All items complete 2026-01-09 01:45:51.927603 | 2026-01-09 01:45:52.670673 | orchestrator | changed: .d..t...... ./ 2026-01-09 01:45:53.438795 | orchestrator | changed: .d..t...... 
./ 2026-01-09 01:45:53.469500 | 2026-01-09 01:45:53.469664 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-01-09 01:45:53.511736 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:53.514103 | orchestrator | skipping: Conditional result was False 2026-01-09 01:45:53.533082 | 2026-01-09 01:45:53.533209 | PLAY RECAP 2026-01-09 01:45:53.533290 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-01-09 01:45:53.533396 | 2026-01-09 01:45:53.689272 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-09 01:45:53.691453 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-09 01:45:54.515738 | 2026-01-09 01:45:54.515928 | PLAY [Base post] 2026-01-09 01:45:54.532175 | 2026-01-09 01:45:54.532396 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-01-09 01:45:55.624939 | orchestrator | changed 2026-01-09 01:45:55.636099 | 2026-01-09 01:45:55.636268 | PLAY RECAP 2026-01-09 01:45:55.636371 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-01-09 01:45:55.636452 | 2026-01-09 01:45:55.771626 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-09 01:45:55.775925 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-01-09 01:45:56.648948 | 2026-01-09 01:45:56.649130 | PLAY [Base post-logs] 2026-01-09 01:45:56.660783 | 2026-01-09 01:45:56.660952 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-01-09 01:45:57.164831 | localhost | changed 2026-01-09 01:45:57.175396 | 2026-01-09 01:45:57.175565 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-01-09 01:45:57.212156 | localhost | ok 2026-01-09 01:45:57.216927 | 2026-01-09 01:45:57.217069 | TASK [Set zuul-log-path fact] 2026-01-09 
01:45:57.233573 | localhost | ok 2026-01-09 01:45:57.245063 | 2026-01-09 01:45:57.245205 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-01-09 01:45:57.272069 | localhost | ok 2026-01-09 01:45:57.276853 | 2026-01-09 01:45:57.277001 | TASK [upload-logs : Create log directories] 2026-01-09 01:45:57.825563 | localhost | changed 2026-01-09 01:45:57.828623 | 2026-01-09 01:45:57.828740 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-01-09 01:45:58.361928 | localhost -> localhost | ok: Runtime: 0:00:00.007818 2026-01-09 01:45:58.366533 | 2026-01-09 01:45:58.366666 | TASK [upload-logs : Upload logs to log server] 2026-01-09 01:45:58.953401 | localhost | Output suppressed because no_log was given 2026-01-09 01:45:58.957796 | 2026-01-09 01:45:58.957993 | LOOP [upload-logs : Compress console log and json output] 2026-01-09 01:45:59.023653 | localhost | skipping: Conditional result was False 2026-01-09 01:45:59.029922 | localhost | skipping: Conditional result was False 2026-01-09 01:45:59.041250 | 2026-01-09 01:45:59.041514 | LOOP [upload-logs : Upload compressed console log and json output] 2026-01-09 01:45:59.105445 | localhost | skipping: Conditional result was False 2026-01-09 01:45:59.106272 | 2026-01-09 01:45:59.111331 | localhost | skipping: Conditional result was False 2026-01-09 01:45:59.123772 | 2026-01-09 01:45:59.123999 | LOOP [upload-logs : Upload console log and json output]